
Salesforce Certified Sharing and Visibility Designer – Performance

2.0 - Performance Introduction

This is Section Two, Performance and Scalability, and this is the introduction lecture. As per the official Salesforce Certified Sharing and Visibility Designer exam outline, the exam topics are classified into three sections: Declarative Sharing, Performance and Scalability, and Programmatic Sharing. This is the Performance and Scalability section, which has a 7% exam question weight. So if the exam has 60 questions, this section will have around four questions.

This section has the following exam objectives, and as you can see on the right side, this objective has two different lectures. The first lecture, Apex Sharing and Calculation Impact on Performance, deals with locking errors, what they are, and how to avoid them. This lecture also talks about long-running recalculations and how to fix them, and about Apex sharing considerations. The other lecture, Large Security Model Design, talks about parallel sharing rule recalculation, deferred sharing calculation, group membership performance tuning, and account object relationship design. Let’s now get started with Section Two. And thanks for watching.

2.1 - Apex Sharing and Calculation Impact on Performance

This is Section Two, Performance and Scalability, and this lecture is about Apex sharing and calculation impact on performance. The topics of this lecture are locking errors, long-running recalculations, deferred sharing calculation, and Apex sharing considerations. Before starting with the locking errors, we have to first understand what locking is and why it is used by Salesforce.

Well, locking is a protection mechanism used by Salesforce when a record is being updated or created. When this happens, Salesforce places a lock on that record to prevent another operation from updating the record at the same time and causing inconsistencies in the data. In other words, imagine the same record being edited at the same time by two different transactions.

Nothing good can happen as a result. Once the operation is complete, the lock is released and any other operation or transaction can now edit this record. But here is the catch. If a given transaction wants to edit a record, but this record is being used by another transaction and it is locked, it can only wait a maximum of 10 seconds for the lock to be released so that it can place its own lock on the record.

Otherwise, if this did not happen, if 10 seconds passed without being able to obtain this record, then an error is thrown: “unable to lock row, record currently unavailable”. So the time that an operation can wait to take a record is 10 seconds. If after 10 seconds this record is still locked, we will get this error.
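To make this error concrete, here is a minimal Apex sketch, assuming the lock is short-lived: a hypothetical helper that retries an update when it fails with the row-lock status code. The class name, the retry budget, and the overall pattern are illustrative assumptions, not an official Salesforce recipe.

```apex
// Hypothetical helper (a sketch, not an official pattern): retry a DML update
// when the rows are locked by another transaction, assuming the lock is
// short-lived. UNABLE_TO_LOCK_ROW is the status code behind the
// "unable to lock row" error described above.
public class LockRetryExample {
    public class StillLockedException extends Exception {}

    public static void updateWithRetry(List<SObject> records) {
        final Integer MAX_ATTEMPTS = 3;   // assumed retry budget
        for (Integer attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                update records;           // fails if a lock is not released in time
                return;                   // success, stop retrying
            } catch (DmlException e) {
                // Re-throw anything that is not a row-lock failure
                if (e.getDmlType(0) != StatusCode.UNABLE_TO_LOCK_ROW) {
                    throw e;
                }
            }
        }
        throw new StillLockedException('Rows still locked after ' + MAX_ATTEMPTS + ' attempts');
    }
}
```

Apex can also take the locks up front with SELECT ... FOR UPDATE in SOQL, which acquires the row locks at query time instead of at DML time.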

Now, what are some common scenarios that can cause locking errors? We will talk about three different scenarios: Bulk API, related records (meaning too many child records), and group maintenance table updates. We’ll start with the first, which is the Bulk API.

So what’s the problem with the Bulk API? With the Bulk API, we can specify the size of a single batch, and a single batch can consist of up to 10,000 records. Another feature of the Bulk API is called parallel processing, where multiple batches can be processed at the same time, in parallel. Because of this parallel processing, more than one batch can be processed at the same time. So let’s give an example. Imagine a Bulk API update operation on 20,000 contacts using a batch size of 10,000 contacts, with parallel processing turned on. And as you know, Contact has a lookup to the Account object. So what’s the problem with that? When two batches contain records pointing to the same parent record, both will try to lock the parent.

As an example, if two contact batches are being processed at the same time, and the two of them contain records that point to the same parent account record, one of the batches will place a lock on the parent account record, which can lead to the other batch throwing an “unable to lock row” error because it waited more than 10 seconds to obtain this parent account record and failed because it was locked by the other batch.

So, in conclusion, because of the parallel processing of the Bulk API, inserting or updating records through the Bulk API can cause multiple updates on the same parent record. So what are some ways that we can use to prevent this? We can try to reduce the batch size.

This is the first option that we can do, because with a reduced batch size, the chance that two batches processed at the same time contain records pointing to the same parent record is lower. We can also try to process the records in serial mode as opposed to parallel; that way, one batch is processed at a time. And we can sort the records based on their parent record, to avoid having child records with the same parent in different batches when using parallel mode, as shown in the sketch below.
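Here is a minimal Apex sketch of that sorting idea; the WHERE filter is just a placeholder assumption. It groups child records by their parent so that rows sharing a parent end up contiguous and land in the same 200-record DML chunk, instead of contending for the parent lock across chunks. With the Bulk API itself, the same idea means sorting the CSV rows by the parent Id column before submitting the job.

```apex
// A sketch: order child records by parent Id before DML so records that share
// a parent land in the same 200-record chunk. The query filter is an assumption.
List<Contact> toUpdate = [SELECT Id, AccountId FROM Contact
                          WHERE LastModifiedDate = TODAY LIMIT 10000];

// Group the contacts by their parent account
Map<Id, List<Contact>> byAccount = new Map<Id, List<Contact>>();
for (Contact c : toUpdate) {
    if (!byAccount.containsKey(c.AccountId)) {
        byAccount.put(c.AccountId, new List<Contact>());
    }
    byAccount.get(c.AccountId).add(c);
}

// Flatten back into one list: children of the same account are now contiguous
List<Contact> ordered = new List<Contact>();
for (List<Contact> grp : byAccount.values()) {
    ordered.addAll(grp);
}
update ordered; // the platform processes this in 200-record chunks
```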

Another common scenario is related records. So what’s the problem here? Well, the main issue here is that when a parent record has too many related child records, you are likely to encounter these lock errors as well, since every time you edit a child record, the parent is locked. The more child records you have, the more likely it is that these will be edited by users, causing the parent record to be locked. Now, what are some ways that we can use to prevent this?

The first thing to do is to move some child records to another parent. Another thing to do is to make sure that the number of records related to each parent record is limited to 10,000, which is done to reduce the number of child records attached to a single parent record. So, in conclusion, every time a record is being edited, at the moment of, let’s say, saving this record, the parent record of this record will also be locked. So imagine that you have, let’s say, 100,000 records that point to the same parent.

You will surely have a time where two of these child records are being edited at the same time, which means that one of them will have a lock on the parent and the other will be waiting. So this will cause the lock error.
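Where this 10,000-child guideline may already be violated, a quick way to spot skewed parents is an aggregate query like the sketch below; on very large tables this may need to run asynchronously to stay within query limits.

```apex
// A sketch: list accounts that exceed the recommended 10,000-child limit.
for (AggregateResult ar : [SELECT AccountId acct, COUNT(Id) cnt
                           FROM Contact
                           WHERE AccountId != null
                           GROUP BY AccountId
                           HAVING COUNT(Id) > 10000]) {
    System.debug('Skewed account ' + ar.get('acct') + ': ' + ar.get('cnt') + ' contacts');
}
```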

Before talking about the locking errors caused by group maintenance table updates, you need to know that a group maintenance table is changed any time you make changes to roles, territories, groups, users, and portal account ownership. So what’s the problem with group maintenance table updates? How can they cause locking errors? Well, the main issue here is that when the role hierarchy is updated, this triggers a recalculation of group membership, and as a result, group maintenance tables are locked by these updates.

This can result in customers occasionally receiving a “could not acquire lock” error.

What are some ways that we can use to prevent this? An option here is to use the granular locking feature, which locks only the portions of the group maintenance tables that are touched by an operation, as opposed to the whole table. This makes it less likely that any two group membership operations will conflict. And to enable this feature, you have to contact Salesforce support.

Another thing to do is to use deferred sharing calculations for group membership calculations, so that sharing recalculation is performed after importing the records. Now let’s talk about long-running recalculations. What’s the problem? Long-running recalculations are encountered when a certain user owns more than 10,000 records. This is called ownership data skew.

Ownership data skew can result in long-running sharing recalculation and performance issues when that user is moved around the role hierarchy or moved in or out of a group that is included in a sharing rule. Now, what are some ways that we can use to prevent this? The first thing to do is to make sure that ownership of records is distributed across a greater number of users. No single user in Salesforce should own more than 10,000 records of an object at the same time.

For that, you can add dummy users that are used to distribute record ownership, as in the sketch below. The performance impact can also be minimized by not assigning a role to the dummy user. But if the user must have a role, it should be at the top of the role hierarchy instead of at the bottom.
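Here is a minimal Apex sketch of that redistribution idea. The alias naming convention and the use of Account records are assumptions for illustration; at real volumes this would typically run as a Batch Apex job.

```apex
// A sketch with assumed data: look up the skewed owner and the role-less
// "dummy" users by alias (an assumed naming convention for this example).
User skewedOwner = [SELECT Id FROM User WHERE Alias = 'skewed' LIMIT 1];
List<User> dummies = [SELECT Id FROM User
                      WHERE Alias LIKE 'dummy%' AND IsActive = true];

// Round-robin the skewed user's records across the dummy owners
List<Account> owned = [SELECT Id, OwnerId FROM Account
                       WHERE OwnerId = :skewedOwner.Id LIMIT 10000];
for (Integer i = 0; i < owned.size(); i++) {
    owned[i].OwnerId = dummies[Math.mod(i, dummies.size())].Id;
}
update owned; // each dummy user now owns a slice, keeping everyone under 10,000
```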

The final piece here is a consideration when creating Apex sharing. Apex managed sharing should be used to share records with a public group of users, as opposed to specific single users. So whenever you want to add records to the share object using Apex, use a group instead of a user, which will make it much more efficient.
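As a concrete example, here is a minimal Apex managed sharing sketch. The custom object Job__c, its Job__Share table, and the group name are assumptions for illustration.

```apex
// A sketch: share a custom object record with a public group rather than
// with individual users. Job__c and the 'Recruiters' group are assumptions.
Group recruiters = [SELECT Id FROM Group
                    WHERE DeveloperName = 'Recruiters' AND Type = 'Regular' LIMIT 1];
Job__c jobRecord = [SELECT Id FROM Job__c LIMIT 1];

Job__Share share = new Job__Share();
share.ParentId      = jobRecord.Id;    // the record being shared
share.UserOrGroupId = recruiters.Id;   // a public group, not a single user
share.AccessLevel   = 'Read';          // 'Read' or 'Edit'
share.RowCause      = Schema.Job__Share.RowCause.Manual;
insert share;
```

One share row then covers every current and future member of the group, instead of one row per user.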

And that’s it for this lecture. In this lecture, we have talked about locking, which is a protective mechanism used when editing or creating a record to prevent another operation from updating the record at the same time and causing inconsistencies in the data. We have talked about locking errors: a transaction can only wait up to 10 seconds for a lock to be released.

Otherwise, we will get the error “unable to lock row, record currently unavailable”. We have also talked about three scenarios that can cause locking errors and how to deal with each one of them. The first one is the Bulk API, where the error is caused when two or more batches contain records pointing to the same parent, which means that both batches will try to lock the parent.

To prevent this, you can try reducing the batch size. You can also choose to process the records in serial mode as opposed to parallel; that way, one batch is processed at a time. You can also sort records based on their parent records. Then we have related records, where the error is caused when a parent record has too many child records. To prevent this, make sure that the number of records related to each parent record is limited to 10,000. This is done to reduce the number of child records attached to a single parent record, and therefore to reduce the chance of getting this error. And we also have the group maintenance table updates, where the error is caused when the role hierarchy is updated.

To prevent this, you can use the granular locking feature, and you can also consider deferred sharing calculations. We also talked about long-running recalculations caused by ownership data skew. You will have performance issues when doing sharing recalculation if one user owns more than 10,000 records of the same object at the same time. To overcome this, you can distribute record ownership so that each user owns fewer than 10,000 records. And the final piece of this lecture was about Apex sharing considerations. We have mentioned that it’s much more efficient to use Apex sharing to share records with groups as opposed to users. And finally, as usual, thanks for watching.

2.2 - Large Security Model Design

This is Section Two, Performance and Scalability, and this lecture is about large security model design. The topics of this lecture are parallel sharing rule recalculation, deferred sharing calculation, group membership performance tuning, and account object relationship design. The first tool that should be considered when dealing with large security model design is parallel sharing rule recalculation.

Parallel sharing rule recalculation allows the sharing rule recalculation to run in parallel and in asynchronous mode in the background, which speeds up the process. This option is used instead of calculating the sharing rules synchronously and not in parallel, and upon completion you will receive an email notification. Parallel sharing rule recalculation is run in these cases: when a sharing rule change affects access rights to a very large amount of data, when you click on the Recalculate button for the sharing rule on the Sharing Settings page, and when you recalculate your sharing rules on the Defer sharing calculations page. You can monitor the progress of your parallel recalculation on the Background Jobs page, or view your recent sharing operations on the View Setup Audit Trail page.

The second tool that can be considered is defer sharing calculation. Defer sharing calculation lets an admin suspend and resume sharing calculations. It affects sharing rule calculation and group membership calculation.

Now, why is it used? Well, performing a large number of configuration changes can lead to a very long sharing rule evaluation or a timeout. To avoid these issues, an admin can suspend these calculations and resume them during an org maintenance period. Note that this feature is not enabled by default, and to enable it you have to contact Salesforce support. So, as we mentioned, deferred sharing calculation affects the sharing rule calculation and the group membership calculation.

Sharing rule calculation recalculates access any time you change an org’s default sharing model (this is the OWD of an object); create, edit, or delete sharing rules; create or transfer any records; update public group membership; create or activate a user; change or reparent roles; add or remove users from territories; reparent territories; or make changes to roles, territories, or public groups participating in sharing rules. Sharing rule calculation is enabled by default; this is the normal calculation that has nothing to do with the defer option.

To be able to suspend and resume sharing rule calculation using the defer sharing calculation option, you have to enable this feature with Salesforce support, and then you have to go to Setup, then Defer sharing calculations, and in the Sharing Rule Calculations related list, click on Suspend. Then you can make changes to sharing rules, or to roles, territories, or public groups participating in sharing rules, and then click on Resume to re-enable sharing rule calculation. To manually recalculate sharing rules, click on Recalculate. As mentioned, defer sharing calculation also affects group membership calculation, and group membership calculation recalculates access any time you make changes to roles, territories, groups, users, and portal account ownership.

For group membership, this calculation is enabled by default, and again, this is the normal calculation that has nothing to do with the defer option. To be able to suspend and resume group membership calculation using the defer sharing calculation option, enable this feature by contacting Salesforce support, then go to Setup, then Defer sharing calculations, and in the Group Membership Calculations related list, click on Suspend. Then you can make the changes to roles, territories, groups, users, or portal account ownership, and then you can click on Resume to re-enable the calculation.

Now, let’s talk about group membership performance tuning. When an admin moves a user from one branch of the role hierarchy to another branch, many things happen in the background. First of all, Salesforce adds or removes access to the user’s data for people who are above the user’s new or old role in the hierarchy.

This is mainly because the whole idea of the role hierarchy is to open up record access to the users and roles above a user’s role. So if you move a user from one role to another, if you move, let’s say, John from a role that was reporting to role X to another role that was reporting to role Y, all of the access data will change, because now we have to recalculate who can see the records of this user. Salesforce also recalculates all sharing rules that include the user’s old or new role as the source group. It removes all of the user’s records from the scope of sharing rules where the old role is the source group, and adds those records to the scope of rules where the new role is the source. Some problems can be encountered as a result of this.

First of all, we have ownership data skew, which involves concentrating ownership of data so that a single user or queue owns a very large number of records of a given object. It can cause performance issues if those users are moved around the role hierarchy. And then we have group membership locking. When updating the role hierarchy or group membership, customers might occasionally receive a “could not acquire lock” error and have to repeat the operation. This error occurs because the sharing system locks the tables holding group membership information during updates, to prevent incompatible updates happening at the same time and timing issues.

Now, what can be done to tackle these problems? How can we fine-tune the performance of group membership? First of all, we can limit the number of records of an object owned by a single user to 10,000, which means that a user can only own 10,000 records of a given object. We can also distribute ownership of records across a greater number of users, which will decrease the chance of long-running updates occurring. If one user needs to own more than 10,000 records, we can remove the role of this user because, as we know, a role is not something required for a user. And if the user must have a role to share data, we recommend that you place them in a separate role at the top of the hierarchy, that you don’t move them out of that top-level role, and that you keep them out of public groups that could be used as the source for sharing rules.

Other solutions include the use of the granular locking feature, which locks only the portions of the group maintenance tables touched by an operation, as opposed to the whole table. We also have the deferred sharing calculations, which can be used to defer group membership calculations so that sharing rule recalculation is performed after importing the records. When making changes to the role hierarchy, process changes at the bottom, which are the leaf nodes, first, then move upwards to avoid duplicate processing. And finally, you can remove redundant paths of access, such as sharing rules that provide access to people who already have it through the role hierarchy.

The final piece that we want to talk about in this lecture is the account object relationship design. As you know from a previous lecture, in addition to explicitly sharing records in Salesforce, there are a number of sharing behaviors that are built in.

These are called implicit sharing because they are not configured by an admin, but are built into the system. These are some examples that involve implicit sharing. We have parent implicit sharing, which gives read-only access to the parent account for a user with access to a child record of this account. This is not used when sharing on the child is controlled by its parent. It is also expensive to maintain with many account children.

And when a user loses access to a child, Salesforce needs to check all other children to see if it can delete the implicit parent share. There is also child implicit sharing, which gives access to child records for the owner of the parent account. This is not used when sharing on the child is controlled by its parent. And when a user loses access to the parent, Salesforce needs to remove all of the implicit children for that user.
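Incidentally, these built-in grants are visible in the share tables. Here is a minimal sketch, assuming at least one account exists, that lists the implicit share rows on an account; ImplicitParent and ImplicitChild are the row cause values that mark this built-in sharing on AccountShare.

```apex
// A sketch: inspect the implicit share rows on one account.
Account acc = [SELECT Id FROM Account LIMIT 1];
for (AccountShare s : [SELECT UserOrGroupId, RowCause
                       FROM AccountShare
                       WHERE AccountId = :acc.Id
                         AND RowCause IN ('ImplicitParent', 'ImplicitChild')]) {
    System.debug(s.UserOrGroupId + ' has access via ' + s.RowCause);
}
```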

As a result of this implicit sharing, some problems can appear. We have the problem of losing access to a child record under a skewed account. So let’s say that we have one account with more than 300,000 contacts under it, and a user with access to one of these contacts will also have parent implicit sharing, so this user can also read the account. This is also applicable if the user has access to more than one contact.

The account access is granted by implicit sharing. Now, what happens when that user loses access to the contact? How can Salesforce know if it should revoke access to the parent account? Well, Salesforce needs to scan all of the other contacts to make sure that the user doesn’t have access to them either. So let’s say that you have access to one contact, and because of implicit sharing you will also have access to the parent account; it will be a read-only access. But let’s say that this account has 300,000 contacts.

So if, let’s say, you lose access to this one contact, what if you have access to another contact? How can Salesforce know that it should delete this implicit share? It has to make sure that you don’t have access to any one of the other contacts, and this will take time if we have that many contacts belonging to the same account. We also have the problem of losing access to the skewed parent account, which is the exact opposite of the first example. In this example, a user has access to all 300,000 contacts because of his or her access to their parent account. What happens when the user loses access to the parent account? Well, Salesforce will take a very long time to delete all of the relevant rows from the sharing tables for all the child objects.

To tackle these problems, it is suggested that you use a Public Read Only or Public Read/Write org-wide default sharing model for all non-confidential data, configure child objects to be controlled by the parent wherever this configuration meets security requirements, and configure parent-child relationships with no more than 10,000 children to one parent record. And that’s it for this lecture. In this lecture, we have talked about best practices that are used when designing large security models. We talked about parallel sharing rule recalculation, which allows the sharing rule recalculation to run in parallel and asynchronously in the background, which speeds up the process.

We also talked about the option to defer sharing calculation, which lets an admin suspend and resume sharing calculations. Group membership performance tuning can solve problems like ownership data skew and group membership locking. The suggested solutions for tuning group membership performance include limiting each user’s owned records to fewer than 10,000, removing a user’s role if more than 10,000 records are owned by this user, and, if a role is needed, moving the user’s role to the top of the hierarchy.

Other solutions include granular locking, deferred sharing calculations, processing role hierarchy changes at the bottom first and then moving up to the top, and removing redundant sharing rules. And finally, we talked about the account object relationship design.

Because of the implicit sharing which is built into Salesforce, and because this implicit sharing uses the share tables, some problems might occur, like the recalculation and deletion of hundreds of thousands of share records. Some solutions include the use of a Public Read Only or Public Read/Write OWD wherever you can, the use of controlled by parent wherever you can, and limiting child records to no more than 10,000 per parent. And finally, as usual, thanks for watching.