
Sunday, 21 February 2016

The Complete Unified Trigger Framework

It has been a while since I wrote a blog for my readers, but I've been working hard on developing a framework that can improve your organisation in a big way.
I decided to turn my attention to triggers. There are many well-published trigger frameworks, and all have merit in that they improve the manageability of code and correctly order execution; however, as with everything, good technologists will continually evolve a model. I was impressed with Tony Scott's http://developer.force.com/cookbook/recipe/trigger-pattern-for-tidy-streamlined-bulkified-triggers pattern as it simplifies the trigger.
Independently of Hari Krishnan https://krishhari.wordpress.com/tag/apex-trigger-design-pattern/ I too noticed some room for improvement, because Tony's framework requires continual adaptation of the TriggerFactory for every new trigger that is developed. The solution that I came up with was basically the same as Hari's.
However, I was concerned that all frameworks to date have only been designed to solve the old problems of code manageability and order of execution. I have always incorporated far more into my frameworks, notably the following additional features:

1.      Trigger Control
2.      Monitoring
3.      DML Consolidation


We will explore these three facets of the framework in more detail later. Let's first have an overview of the building blocks of the framework.


The classes in Red make up the baseline of the framework. These classes do not need to be changed. The classes in Blue are classes that will be created for each trigger. You can create as many "Logic" classes as you wish, depending on the number of separate business areas and the complexity of the codebase in your organisation. The "Account Helper" class is optional; it just aids the Logic classes and provides better modularised code.
Of course we must also have the starting trigger, shown as "AllAccountTrigger" above.
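
To make the structure concrete ahead of the detailed walkthrough, below is a minimal sketch of how the pieces could hang together. The names (ITriggerHandler, TriggerDispatcher, AllAccountTriggerHandler, AccountLogic) are illustrative placeholders rather than the framework's actual classes, and the Trigger Control, Monitoring and DML Consolidation features are omitted here. Each class would live in its own file.

// Baseline interface implemented by every handler (part of the "Red" baseline layer)
public interface ITriggerHandler {
    void beforeInsert(List<SObject> newRecords);
    void afterUpdate(List<SObject> newRecords, Map<Id, SObject> oldMap);
    // ...one method per trigger event you care about
}

// Baseline dispatcher that routes trigger events to the supplied handler
public class TriggerDispatcher {
    public static void run(ITriggerHandler handler) {
        if (Trigger.isBefore && Trigger.isInsert) {
            handler.beforeInsert(Trigger.new);
        } else if (Trigger.isAfter && Trigger.isUpdate) {
            handler.afterUpdate(Trigger.new, Trigger.oldMap);
        }
        // ...remaining events follow the same pattern
    }
}

// Per-object handler (the "Blue" layer) that delegates the real work to Logic/Helper classes
public class AllAccountTriggerHandler implements ITriggerHandler {
    public void beforeInsert(List<SObject> newRecords) {
        // AccountLogic.validate((List<Account>) newRecords);
    }
    public void afterUpdate(List<SObject> newRecords, Map<Id, SObject> oldMap) {
        // AccountLogic.syncRelatedRecords((List<Account>) newRecords, oldMap);
    }
}

// The starting trigger stays tiny
trigger AllAccountTrigger on Account (before insert, after update) {
    TriggerDispatcher.run(new AllAccountTriggerHandler());
}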

In the next blog I will go into details of each class.

Friday, 19 February 2016

Watch out for Time Based Workflows

Time Based Workflows are very useful for actioning something in the future, which saves you writing scheduled jobs. But be careful, very careful.
I will tell you a story. I had a very critical Time Based Workflow which, if it didn't fire as expected, would severely affect revenue; but it had always run very smoothly, so there was no expectation of that changing.
We ended up creating a number of Time Based Workflows, and typically each record would fire about six of them at different times. Since orders were coming in from many agents, there were a lot of workflow actions firing. Unfortunately we not only hit our limit of 1,000 per hour, but a large number were also queueing up behind that limit, so in the monitoring section we could only see the same orders queueing and we thought that the Time Based Workflow had somehow broken.
So, if you want to control when your actions fire relative to, say, the creation or update of a record, Time Based Workflows are of course ideal; but if the queue is clogged, your actions won't fire as expected. So what do you do?
You can take various actions. I will try to suggest the most cost-effective methods that avoid coding, which would be expensive:
  1. Create a scheduled report to keep track of how many records you expect to be in the queue. The report should match the criteria clause of your Time Based Workflow. If the report shows there are too many queued, make sure you have a script that manually processes any remaining records (a rough sketch follows after this list).
  2. Carefully calculate how many records you expect to enter the queue at any specific time, so you can plan whether you will be well within your limits or not. If you expect the limit to be broken, simply put in a business case to Salesforce for it to be increased. They will listen.
  3. If it is not 100% necessary for all actions to occur at specific times relative to the workflow criteria, you can create a scheduled job to process any other records.
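
To illustrate point 1, here is a minimal sketch of such a fallback, written as a scheduled Apex job. The object and field names (Order__c, Needs_Action__c, Action_Due_Date__c) and the action performed are hypothetical; the WHERE clause should mirror the entry criteria of your Time Based Workflow.

// Hypothetical fallback job - mirror your Time Based Workflow's entry criteria in the query
global class TimeBasedWorkflowFallbackJob implements Schedulable {
    global void execute(SchedulableContext sc) {
        List<Order__c> overdue = [
            SELECT Id, Needs_Action__c
            FROM Order__c
            WHERE Needs_Action__c = true
            AND Action_Due_Date__c < :System.now()
            LIMIT 200
        ];
        for (Order__c ord : overdue) {
            // Perform whatever the workflow's action would have done
            ord.Needs_Action__c = false;
        }
        update overdue;
    }
}

// Schedule it to sweep once an hour, e.g. from Anonymous Apex:
// System.schedule('TBW Fallback Sweep', '0 0 * * * ?', new TimeBasedWorkflowFallbackJob());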

Sunday, 15 November 2015

How Should a Company Decide What Projects To Do

This could be achieved simply by having discussions between fellow employees, but as more people become involved, reaching consensus is often difficult.

Or a consensus could be reached through analysis of ROI. However, to derive an accurate ROI you will need to produce accurate estimates of the work to deliver a project and of the expected monetary benefits, both of which will involve assumptions. Such assumptions need to be assessed for their reliability and are themselves difficult to reach a consensus opinion on.

Often companies attempt to run with the former approach because it is easy to implement, with varying degrees of success. In the early years of a company, the leadership can often have enough knowledge of every aspect of the business to make informed decisions about which projects to pursue. However, as the company grows this becomes increasingly difficult, so keeping the same model brings less success. Companies eventually conclude that they must embrace the latter approach of a more scientific analysis of ROI.

However, the transition to such a model is hugely difficult.
A company must first estimate ROI, and many factors can influence this, making it difficult to estimate accurately. Such influences include how much the company will make from a new product when it is released, which is in turn influenced by many factors such as how well the product is received by customers.

Once an ROI is estimated, the team also needs to estimate how much work is required to deliver the product. Companies and people vary in how accurately they estimate this, often because of influences such as employee turnover or the business changing direction.

Let's say a company can fairly accurately estimate both ROI and the required work effort for ALL proposed projects. The business also needs to decide how it should allocate money to individual areas of the company. Let's say this is achieved as well; we now need to decide what work to do. Should this be based solely on the ratio between cost and ROI? Well, if one project will consume a huge percentage of the overall budget, many parts of the company will be neglected, which will be detrimental to those parts, and good people may leave, resulting in a sharper long-term decline in ROI. Also, if reputation-building projects are declined in favour of other, higher-ROI projects, in the long term this can have a hugely negative effect as the retention rate of customers drops. Then there are projects whose rejection can lead to the loss of important accreditations such as ISO, projects which prevent the company from receiving fines, projects that are enablers for future expansion but provide little or no ROI now, and projects that reduce risk for the company but produce no ROI, such as producing data backups of systems.

As you can see, even if a company moves to a more ROI-based decision model, it still needs to make many judgement calls that are not based on scientific analysis of numbers, and in actual fact the overall number of decisions can escalate.

Can a company even remove these decisions from the process? It is certainly possible if you decide what percentage of the overall budget a project can consume, use some mathematical hypothesis testing of the assumptions, and decide whether you want to spread projects throughout the company rather than concentrating the budget on a select few projects. If you do spread budgets throughout the company, the approach for doing this effectively must be decided: based on department size, on the department's importance to the company, or a mixture of both.

Dividing projects into logical groups can allow the company to select a diverse set of projects that provide benefit in various ways, rather than focusing solely on ROI, such as:

1. Positive ROI projects
2. Reputation building projects
3. Accreditation projects
4. Employee well being projects
5. Business risk reducing projects
6. Future positioning projects

As a company you can then decide on percentage weightings for the importance of the above categories.

Then you also need to decide on the spread of budgets across the departments, broadly speaking split into the following categories, though this will vary per company:

1. Finance
2. IT
3. Sales and Marketing
4. HR
5. Property
6. Legal
7. Customer Service
8. Media

Each of the above is usually sub-divided many times.

Now you can start deciding what budgets can be provided to teams, making sure that the chosen projects produce an even distribution of budget across the six types of project outlined above.

Once you have run a year-long or even longer sequence of releases, analyse whether the ROI produced is as expected and then further refine your model based on this input.

Since the ROI approach requires a huge amount of analysis, it takes a lot of resources to reach reliable decision making, and consequently the cost can often prohibit smaller companies from adopting it. A company must decide, during its natural evolution, the appropriate stage at which to transition to an ROI approach.

Also, the ROI approach is vastly more complex than simply trusting the leadership to make the right decisions, and unless every aspect outlined above is scrutinised and analysed in minute detail, for all proposed projects, the effectiveness of this decision approach is undermined. In summary, if a company follows the ROI approach, it must do it very well to make it effective.

The drawback of giving departments autonomy to decide where and how to spend their budget is that employees rarely expect to remain with the company in 5 or 10 years, so under this model employees will naturally take a more short-term viewpoint when deciding where and how to spend their department's budget.

There is, however, another model a company can follow, which takes a more democratic approach: everyone votes on vision cards (basic blueprints of ideas). The issues with this model are, firstly, that very good ideas may not get supported, and it can become more of an employee popularity contest than an idea contest. Secondly, employees may simply not understand all the vision cards and their benefits; a finance executive will have little understanding of the issues faced in IT, so how can that person vote on ideas presented by IT? Of course you can limit what employees can vote on, but this then really drifts back towards the second model. Thirdly, this model doesn't address employees having a more short-term view; only the first model addresses this.

What I am demonstrating here is that the simple question of what a company should do is never simple; it is actually among the most important decisions a company makes, and the approach it takes to making these decisions is crucial to its success. Importantly, that approach should be continually assessed and improved each year.



Sunday, 25 October 2015

A Useful Winter 16 Function

Not many people will notice a small function in the Winter '16 release which has the potential to improve the performance of the entire platform considerably, if we all use it wisely.

System.SObject Class

recalculateFormulas()
Recalculates all formula fields on an sObject, and sets updated field values. Rather than inserting or updating objects each time you want to test changes to your formula logic, call this method and inspect your new field values. Then make further logic changes as needed.

For example :

You want to insert an Account in a test method and check that your formulas are calculated correctly. Previously you would have had to perform a DML operation, and we all know how expensive DML is for the platform. This little method bypasses the need to do the DML.

Say your Account is quite basic and has several formula fields.



Account acc = new Account(Name='Steves Test');

//Now test that the formula field StevesFormula__c has "This is a test" as its value, without doing a DML

acc.recalculateFormulas();

system.assert(acc.StevesFormula__c == 'This is a test');
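
Wrapped up as a complete test, that could look like the sketch below (assuming StevesFormula__c is a formula field on Account that evaluates to 'This is a test' for this record):

@isTest
private class RecalculateFormulasTest {
    @isTest
    static void formulaIsPopulatedWithoutDml() {
        Account acc = new Account(Name = 'Steves Test');

        // No insert or update needed - the formula is evaluated in memory
        acc.recalculateFormulas();

        System.assertEquals('This is a test', acc.StevesFormula__c);
    }
}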




Saturday, 24 October 2015

The New World Of Debugging



I cannot begin to describe how I'm feeling. I'm just so excited. Have you seen the new debugging capabilities in Eclipse and the Developer Console? If you haven't, stop what you are doing now. If you are drinking a nice bottle of Moet, or you are digging into some nice chocolate cake. Stop! Open up Salesforce and have a look.

But is this exciting, is this thrilling? Well, for some it isn't, but for me, god damn it is. Why?

With these tools you will be able to develop faster and so release faster and so satisfy your stakeholders and keep them happy.


You can now do the following:

1.    You can run individual test methods in a test class

           
You can now select individual test methods from your test classes to include in a run. You can also choose whether to run tests synchronously, and you can rerun only the failed tests.


Oh, I was one of the people suggesting this many years ago on IdeasExchange.

2.    If you have been hitting debug log limits regardless of what logging level you set, you can now start your logging at a specific point in your code to prevent this.


Trace flags now include a customizable duration. You can also reuse debug levels across trace flags and control which debug logs to generate more easily than ever before. This feature is available in both Lightning Experience and Salesforce Classic. A debug level is a set of log levels for debug log categories: Database, Workflow, Validation, and so on. A trace flag includes a debug level, a start time, an end time, and a log type. The log types are DEVELOPER_LOG, USER_DEBUG, and CLASS_TRACING. When you open the Developer Console, it sets a DEVELOPER_LOG trace flag to log your activities. USER_DEBUG trace flags cause logging of an individual user’s activities. CLASS_TRACING trace flags override logging levels for Apex classes and triggers, but don’t generate logs.

Debug > Change Log Levels

3.    Of course there are other features you should check out, such as all the Analysis features; go to 

Debug > Switch Perspective > Analysis


You can check any limits that you may be approaching.
You can check how long it takes to run certain functions and what actions occur during execution.
You can see the order of execution in a tree diagram and various other views.
You can trace variables as they change in your code.

4.    Eclipse debugging


Use the Apex Debugger to complete the following actions.

• Set breakpoints in Apex classes and triggers.
• View variables, including sObject types, collections, and Apex System types.
• View the call stack, including triggers activated by Apex Data Manipulation Language (DML), method-to-method calls, and variables.
• Interact with global classes, exceptions, and triggers from your installed managed packages. When you inspect objects that have managed types that aren't visible to you, only global variables are displayed in the variable inspection pane.
• Complete standard debugging actions, including step into, over, and out, and run to breakpoint.
• Output your results to the Console window.






Saturday, 3 October 2015

The Importance Of Estimating Requirements


I haven't been blogging for a while, mainly because I've been doing some DIY work in my house; so although my blogging and my readers have suffered, my kitchen is looking much better.
In this blog I'd like to talk about estimation, something developers don't like much.
Estimating requirements, and estimating accurately, is more important than most developers think. Most see it as just another administration task that stops them developing, but without it companies struggle to operate properly.
There are different types of estimating, such as using story points http://scrummethodology.com/scrum-effort-estimation-and-story-points/ or estimating by time.
Personally, I suggest it doesn't really matter which method you choose to estimate stories. Remember, a story at this stage has only the basic outline of the work and not the detail, so the estimate is a very approximate one.

But if I were to choose a method, I would choose estimating by time. The reasons are: time is a universally known gauge and doesn't need to be calibrated; when new members join your team, with story points they need to be taught what your base story point is, whereas with time they don't; and if you have more than one team in your company, each team may have a different base story point, so if you move staff between teams this can be confusing for team members and lead to inaccuracies. Another benefit of using time is that it can be used to calculate forecasted budgets much more easily, whereas if you use story points you first need to translate them into their equivalent time before working out the forecasted budgets. Of course, you could argue that if you are working to a set sprint length of, say, 2 weeks, and you can complete 5 story points per person in those 2 weeks, then this is the only translation into time that you need.
As the project development start date approaches, finer detail of the requirements is gathered and the stories are broken down into small individual tasks.
Some teams believe they only need to refine the story points they gave at the beginning and then calculate how many stories they can fit into a sprint, based on the priority of the stories.
I agree with the overall concept, but I believe the individual tasks should be sized themselves. The only issue here is that if you use story points you can end up with a scenario where you have 0.1 story points, which undermines the value of using story points on the tasks of the stories.

Many teams don't bother entering the actual time spent on tasks or stories. Is it really required? If you say you are going to deliver 15 story points in a 2-week sprint and that is exactly what you deliver, does it really matter if you log your actual time? Well, I would argue it does.
Say, for example, you have 2 stories and you use time to size them. You estimate that Story 1 and Story 2 will each take 1 week to complete, but in reality Story 1 took just 2 days and Story 2 took 8 days. Both stories were still completed in exactly the total time estimated, but in reality the team is very bad at estimating and this should be improved.
In the next sprint the team could get it very wrong, grossly under-estimate both stories, and only deliver one of them.
The trade-off, however, is the extra administration time required to enter actual time worked.
So on balance I would suggest using time to estimate both stories and tasks. Start by entering actual time until you prove the accuracy of your estimating at both the story and task level. Once you prove a consistently high percentage accuracy across all team members, you can remove the extra administration of logging actual time. Of course, if your team changes considerably, you may need to restart actual time logging for a period.

Sunday, 16 August 2015

A Generic Recursive Runtime Decision Making Batch Class


Previously you could only execute 5 batch jobs from any single context.
But one of my ideas on IdeasExchange was included in a recent Salesforce release: you can now queue up to 100 batches, which become queued and are visible in the AsyncApexJob object.

What I'd like to cover in this blog is a generic, recursive, runtime decision-making batch class.
We will make a class that requires little change and can serve as the batch processing for any batch job.
There are situations whereby once a batch has fully executed you want to initiate another batch:

1.      The first batch executes as many operations as it can and then initiates a decision process that either executes the same batch process again or ends the execution

            Situations where this scenario can be used:
a.       A callout to a 3rd party system where you don't know how many records exist in the 3rd party system


2.      After the first batch executes, records are set into a state that allows a different batch to execute. Of course, the second batch could be scheduled for a certain time, but there is no way of knowing when the first batch will complete, so you would have to space the batches far apart. If it is important to complete the operations in a timely fashion, you will want to execute the second batch immediately when the first batch completes

            Situations where this scenario can be used:
a.       The first batch updates a field on the Account, which fires a trigger and workflows. This sets conditions on, say, the Contact object by updating various fields. The second batch then picks up Contact records where this field has been updated. So we need the first batch to complete before the second can process.


Let's consider a situation where a batch makes a call to a 3rd party system requesting a number of records, but due to payload limitations the 3rd party can only return a certain number of records, and the 3rd party doesn't provide a means of identifying how many records it holds, because such a call drains its system resources.
So we need to set up a batch class that makes a call to the 3rd party and retrieves X records. When the batch falls into finish(), we call a decision method which identifies how many records were processed, which tells us whether we have processed the last batch or not.






public with sharing class Constants {
    public static final String CONST_DOWNLOAD = 'DOWNLOAD 3rd PARTY';
    // Key of the Configurations__c custom setting used by the decision method below
    // (the value here is illustrative)
    public static final String CONST_MAX = 'MAX RECORDS RETRIEVED';
}






global class batchProcessing implements Database.Batchable<sObject>, Database.Stateful, Database.AllowsCallouts {
    global Integer mx;                 // number of records to process per callout
    global String batchType;           // identifies which batch processing to call
    global String soql;                // the SOQL query, if the batch is to run a query to feed records into execute()
    global Map<String, String> vars;   // holds any arguments to be passed to the batch logic in execute()
    global Boolean success;            // true if the last batch execution was successful; if it wasn't we might decide to
                                       // stop further batch processing since a fault has possibly been encountered

    global batchProcessing(String thisBatchType) {
        batchType = thisBatchType;
    }

    global batchProcessing(String thisBatchType, Map<String, String> thisVars, String thisSoql) {
        batchType = thisBatchType;
        vars = thisVars;
        soql = thisSoql;
    }

    global Database.QueryLocator start(Database.BatchableContext bc) {
        if (soql == null || soql == '') {
            // No query supplied - return a single dummy row so execute() runs exactly once
            return Database.getQueryLocator('Select Id From User Limit 1');
        } else {
            return Database.getQueryLocator(soql);
        }
    }

    global void execute(Database.BatchableContext bc, List<sObject> glbs) {
        // Identifies the batch type we are calling; for a different batch simply introduce another if statement
        if (batchType == Constants.CONST_DOWNLOAD) {
            if (vars.containsKey('Max') && vars.get('Max') != '0') {
                String maxCls = vars.get('Max');
                mx = Integer.valueOf(maxCls) - 1;

                // Call the method that retrieves "mx" records from the 3rd party; if the callout can be made
                // and is successful this returns true. You could introduce a for loop here to make the callout
                // a maximum of 10 times and reduce the number of batch executions.
                success = Utils.retrieveData(mx);
            }
        }
    }

    global void finish(Database.BatchableContext bc) {
        if (batchType == Constants.CONST_DOWNLOAD) {
            if (success == true) {
                // The last callout was successful - decide whether to run again
                Utils.decideToRunAgain(mx);
            } else {
                // Do something when the last batch didn't process and encountered an issue
            }
        }
    }
}
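
The batch relies on a Utils.retrieveData() method that is not shown in the original; a rough sketch of what it could look like is below. The endpoint, the JSON handling, and the shape of the Configurations__c custom setting are assumptions for illustration; the important part is that the method records how many records the 3rd party actually returned, so the decision method can compare that count against "mx".

    // Sketch only - the endpoint and response handling are illustrative assumptions
    public static Boolean retrieveData(Integer mx) {
        try {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:ThirdPartySystem/records?limit=' + mx);
            req.setMethod('GET');
            HttpResponse res = new Http().send(req);
            if (res.getStatusCode() != 200) {
                return false;
            }

            List<Object> records = (List<Object>) JSON.deserializeUntyped(res.getBody());
            // ...transform and upsert the returned records into Salesforce here...

            // Record how many records came back so decideToRunAgain() can read it
            Configurations__c latestCall = Configurations__c.getInstance(Constants.CONST_MAX);
            if (latestCall == null) {
                latestCall = new Configurations__c(Name = Constants.CONST_MAX);
            }
            latestCall.Value__c = String.valueOf(records.size());
            upsert latestCall;
            return true;
        } catch (Exception e) {
            return false;
        }
    }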





This is the decision method:


    public static void decideToRunAgain(Integer mx) {
        // This custom setting is set in retrieveData() to the number of records retrieved from the 3rd party
        // in the last callout made in the batch execute(). If this number is less than "mx", the last callout
        // was the final callout required.
        Configurations__c latestCall = Configurations__c.getInstance(Constants.CONST_MAX);
        Integer newLatestCallInt = (latestCall != null) ? Integer.valueOf(latestCall.Value__c) : 0;

        if (newLatestCallInt == mx) {
            // The last callout retrieved the same number of records as was requested, so it cannot be the
            // final callout and a new batch can be created.

            // We also need to check that the number of queued batches is less than 100, otherwise the maximum
            // in the queue has been reached. Unfortunately we cannot halt execution or continually poll
            // AsyncApexJob in a loop waiting for the queue to drop, because that would hit governor limits.
            // Note: JobType = 'Batch Apex' identifies a batch being processed; JobType = 'Batch Apex Worker'
            // identifies the latest record being processed in the batch and so is constantly changing.
            if ([Select Id From AsyncApexJob Where JobType = 'Batch Apex' And Status = 'Holding'].size() < 100) {
                batchProcessing batch = new batchProcessing(Constants.CONST_DOWNLOAD, <<specify the other parameters>>);
                Database.executeBatch(batch, 1);
            }
        }
    }
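
Kicking off the first run is then a one-off call such as the following (the 'Max' value is illustrative; pass a real SOQL query as the third argument if the batch should be fed records from your org rather than the dummy single-User query):

// Illustrative kick-off, e.g. from Anonymous Apex or a Schedulable
Map<String, String> vars = new Map<String, String>{ 'Max' => '200' };
batchProcessing firstRun = new batchProcessing(Constants.CONST_DOWNLOAD, vars, null);
Database.executeBatch(firstRun, 1);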




           

There are various themes you can employ with this concept; for example, all the logic could be pulled completely out of the batch into separate classes, keeping the batch class lightweight so that it actually never needs to change.

Further information
http://releasenotes.docs.salesforce.com/en-us/spring15/release-notes/rn_apex_flex_queue_ga.htm?edition=&impact=