Planning your Integration for Data Integrity
- Data load - Taking a document, normally a spreadsheet or comma-separated value (CSV) file, and uploading it into Salesforce using specific tools.
  - Pros: free (download from the Org), easy to use, some templates available (NPSP Data Loader Template).
  - Cons: manual, time-consuming, not live.
- AppExchange solutions - Free or paid applications developed by a Salesforce Partner that offer some form of integration to a 3rd party solution. NOTE: AppExchange apps can be 'Native', i.e. built on the Salesforce Platform, or 'Non-Native', i.e. built on separate clouds. Where possible, choose a Native app: these don't require integration because they store their data directly in the Salesforce data model.
  - Pros: installed into the Org, secure, easy to use.
  - Cons: cost, doesn't cover all products.
- Middleware/ETL (Extract, Transform, Load) solutions (Zapier, Talend, Integromat, etc.) - 3rd party products that provide a connection between solutions and some level of the data transformation required for the connection.
  - Pros: easy to configure with no/low code, cost.
  - Cons: can be limited in functionality.
- Bespoke/custom integration - Using configuration/code to build custom integrations with Salesforce integration capabilities (APIs).
  - Pros: provides the most flexibility, performs better with large amounts of data, data integrity can be enforced in the code.
  - Cons: developer skills required, costly to maintain.
Considerations
- What is the System of Record for different types of data?
- What unique IDs are available that can be used for matching data?
- What will happen if fields are not completed - i.e. will they overwrite existing data? Similarly, how do you guard against bad data (incorrect or incomplete submissions) overwriting existing data that is clean and complete?
- How often does data need to move between systems? Live, every X minutes, every X days, etc.
- Limits of integrations and possible implications for data storage. (This may be an issue with the Mailchimp for Salesforce app, where each sync stores a lot of data that uses up storage.)
- Costs of integrations: middleware charges for the number of operations/zaps, so it can be difficult to scale up. E.g. in our case, one attendee registering for an event uses 5-7 'zaps'.
- A Salesforce duplicate rule with 'alert' enabled can block the creation of records by middleware (Zapier) - link to Zapier support article on this.
- Keeping picklist values synchronised between systems - have a change process in place so that when a picklist value is added, changed, or removed, the values in corresponding systems and integrations are also changed.
- Using non-native Salesforce apps (i.e. not built on the platform) will usually mean data is duplicated across multiple solutions.
- How the data inserted by the integration will impact automations within Salesforce and possibly duplicate them.
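The concern about incomplete submissions wiping out clean data can be handled by ignoring blank incoming values before applying an update. A minimal Python sketch of this idea (the field names and `safe_merge` helper are illustrative, not part of any Salesforce API):

```python
def safe_merge(existing: dict, incoming: dict) -> dict:
    """Merge an incoming record into an existing one, skipping blank
    incoming values so they cannot overwrite clean, complete data."""
    merged = dict(existing)
    for field, value in incoming.items():
        if value not in (None, ""):  # only non-empty submissions win
            merged[field] = value
    return merged

# Usage: a form submission with an empty Phone leaves the stored Phone intact.
record = {"FirstName": "Sara", "Phone": "+905525672187"}
update = {"FirstName": "Sara", "Phone": "", "Email": "sara@example.org"}
merged = safe_merge(record, update)
```

The same rule can usually be expressed natively in middleware (e.g. conditional field mapping) or in Apex before a DML update.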
Best practices
- Create a diagram showing the connection between the systems (Cloud to Cloud).
- For each connection, determine the following:
  - In which direction the data is flowing (into Salesforce, out of Salesforce).
  - What data is being transferred, i.e. which fields.
- Map the data between each system at a field level - FirstName to Given Name, etc. - and determine where duplicates may occur.
- Identify what matching rules the integration allows (e.g. Zapier matches only on one field, such as Email to Email, while FormAssembly allows matching on multiple fields with rule logic).
- Where possible, avoid using personal data as unique identifiers in favour of system-generated IDs. For example, an email address is commonly used as an identifier, but a person can have more than one email address, and more than one person can have access to a family email address. Using a system-generated ID to represent a person means they can change their email address without their interaction history being decoupled.
- If possible, validate responses to key fields, e.g. ask for the email address to be entered twice, validate post codes, use picklists for city/country.
- Collect enough data to enable reliable matching and duplicate identification.
Notes: We don't want to lose data, so the duplicate rules (if any) might let people create duplicate records and alert on / report this behaviour.
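The field-level mapping and multi-field matching practices above can be sketched as follows (the field names, `FIELD_MAP`, and helper functions are illustrative assumptions, not any product's actual API):

```python
# Illustrative map from an external system's field names to Salesforce fields.
FIELD_MAP = {"Given Name": "FirstName", "Family Name": "LastName", "E-mail": "Email"}

def to_salesforce(row: dict) -> dict:
    """Rename external fields to their Salesforce equivalents,
    dropping anything that has no mapping."""
    return {FIELD_MAP[k]: v for k, v in row.items() if k in FIELD_MAP}

def is_match(a: dict, b: dict, keys=("Email", "LastName")) -> bool:
    """Match on multiple fields (case-insensitively) rather than
    on email alone, reducing false merges on shared addresses."""
    return all(str(a.get(k, "")).lower() == str(b.get(k, "")).lower() for k in keys)
```

Writing the map down as data (rather than burying it in middleware steps) also gives you a single artifact to review when a picklist value or field name changes in either system.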
- It might be good to log integration errors on an object to keep track of actions and avoid data loss (whether the integration is 3rd party, custom Apex REST services, or ETL), perhaps with a retry mechanism for the failed actions (e.g. bulk daily batches).
- Where possible, mark records as master (golden) records. Say we have 3 Leads that are likely duplicates: we can mark one of them as the master record and design our processes around master data.
  - When data volume is small, users can do this evaluation manually through screen flows or the standard duplicate management component on Lightning pages.
  - When data volume is large, some kind of automation might be handier. For example, daily jobs can build relationships between duplicate records so that users can see the results on reports. On-record manual Merge with Duplicates or Check Duplicates actions might be designed. Screen flows or daily jobs may read 'duplicate control fields' from a custom setting or custom metadata and search the system with that information.
- Where possible, put external system identifiers on Salesforce records as an external key and upsert through this key field.
- Throw Platform Events on exceptions and let customers build their own processes to handle errors and actions.
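The upsert-by-external-key pattern above maps onto the standard Salesforce REST resource for upserting an sObject by an external ID field (`PATCH /services/data/vXX.0/sobjects/<SObject>/<ExternalIdField>/<value>`). A sketch that only composes the request - the instance URL, external ID field name, and API version are placeholder assumptions:

```python
import json

def build_upsert_request(instance_url, sobject, ext_field, ext_value, fields):
    """Compose a PATCH request that upserts a record by its external ID.
    Salesforce creates the record if no match exists, otherwise updates it,
    so the caller never has to query-then-branch on existence."""
    url = (f"{instance_url}/services/data/v58.0/sobjects/"
           f"{sobject}/{ext_field}/{ext_value}")
    return {"method": "PATCH", "url": url, "body": json.dumps(fields)}
```

Because the external system's own ID is the key, retried or replayed loads are idempotent - a second run updates instead of creating a duplicate.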
How to Complete a Data Integration with a Custom REST API
A Connected App must be created for the Salesforce connection. 'Enable OAuth Settings' must be checked, and one or more OAuth scopes chosen for the connection. Note that it can take some time before the Connected App becomes usable after it is created.
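Once the Connected App is active, a client exchanges its credentials for an access token at Salesforce's standard token endpoint. A hedged sketch of the OAuth 2.0 username-password flow request (all credential values are placeholders; other flows such as JWT bearer may be preferable in production):

```python
from urllib.parse import urlencode

TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

def token_request(client_id, client_secret, username, password):
    """Build the form-encoded body for the OAuth 2.0 username-password
    flow against a Connected App. The response (not shown) contains
    the access_token and instance_url used for subsequent API calls."""
    body = urlencode({
        "grant_type": "password",
        "client_id": client_id,        # Connected App consumer key
        "client_secret": client_secret, # Connected App consumer secret
        "username": username,
        "password": password,           # often password + security token
    })
    return TOKEN_URL, body
```

The returned body would be POSTed with `Content-Type: application/x-www-form-urlencoded`.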
A REST API service is created for the required mapping. It expects a body like this:

```json
{
  "FirstName": "sara",
  "LastName": "avcı",
  "Email": "[email protected]",
  "Phone": "+905525672187",
  "externalIdField": null
}
```
This endpoint can be called from a client such as Postman.
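The same call Postman makes can also be issued from a script. A sketch using only Python's standard library - the Apex REST path `/services/apexrest/ContactService`, instance URL, and token are placeholder assumptions, since the actual `@RestResource` URL mapping depends on the Apex class:

```python
import json
import urllib.request

def call_contact_service(instance_url, access_token, payload):
    """Build the POST request that sends the JSON body shown above to a
    custom Apex REST endpoint, authorised with the OAuth access token."""
    return urllib.request.Request(
        url=f"{instance_url}/services/apexrest/ContactService",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Usage: pass the Request to urllib.request.urlopen(req) to actually send it.
req = call_contact_service(
    "https://example.my.salesforce.com", "ACCESS_TOKEN",
    {"FirstName": "sara", "Phone": "+905525672187", "externalIdField": None},
)
```

Wiring this into a scheduled job, with failures written to an error-log object as suggested in the notes above, turns the one-off Postman test into a repeatable integration.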