Managing Complexity in your Enterprise Salesforce1 Implementations

This article was originally published for DeveloperForce on December 30, 2014.  See the following link: http://sforce.co/1CRU29o

Salesforce1 is a powerful platform that allows your organization to transform itself.  But as Salesforce has grown, so too have the scope and complexity of managing the platform.  As a Salesforce Architect it is your responsibility to understand and orchestrate all the moving parts.  This can be quite an overwhelming proposition when you consider just some of the components that have been covered by this blog series:

And this is just scratching the surface…

So how do you begin to approach managing Salesforce1 Enterprise Implementations?  Should all of your Salesforce resources be on the same team?  Should Salesforce teams be distributed across Development, Architecture, and Operations?  What is the best software development methodology to use?  What tools are necessary?  What are the roles and responsibilities that you need to consider when scaling Salesforce into your enterprise?  And why should you, the Salesforce Architect, care about this?

Let’s start with the easiest question first: why should the Salesforce Architect be concerned with these issues?  If you have looked into the qualification criteria for a Salesforce.com Certified Technical Architect (CTA) you may be wondering why there are so many “non-Salesforce” aspects of the certification.  Not only must you understand Salesforce inside and out; you must also understand things like Integration Architecture, Governance, Application Lifecycle Management, Change Management, Test Strategy, Release Strategy, etc.  The reason behind this is that only someone who understands the Salesforce1 platform AND these other components is truly qualified to lead a large Enterprise to their best approach for managing Salesforce.

So what goes into a good design for managing Salesforce1?  Here are some of my recommendations:

  1. Salesforce Architecture should fall under (and be a key driver of) your Organization’s Enterprise Architecture – This means following the architectural guidance of any EA frameworks (e.g., TOGAF or Zachman) that exist in your company.
  2. Salesforce should adhere to IT Service Management best practices – I like to use the ITIL v3 framework here.  You should consider your Salesforce Strategy, Salesforce Design, Salesforce Transition, Salesforce Operations, and Salesforce Continual Improvement as separate processes, and potentially different organizational constructs.  Salesforce should fall under any organizational frameworks here, especially related to Change Management and Support.
  3. Salesforce should remain as agile as possible – This principle often conflicts with the first two items in the list, but finding the right balance is critical to achieving agility AND predictability.  There are numerous ways to do this: one is to create a clear policy of allowable production changes by administrators vs changes requiring a formal release window.  Another is to set up very clear technical boundary points between Salesforce and other enterprise systems through the use of staging tables or Apex Web Services (see the sketch following this list).
  4. Salesforce Support should follow a very structured Tiering philosophy – Primary support for Salesforce should come from Salesforce users themselves via a combination of self-service, knowledge documents, and delegated administration.  Relying solely on your Salesforce administrators limits your ability to scale.
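
To make the boundary-point idea concrete, here is a minimal sketch of an Apex Web Service acting as a clearly defined entry point for another enterprise system.  The URL mapping, class name, and field choices are hypothetical; the point is that external systems call one published contract rather than manipulating your objects directly.

```apex
// Hypothetical sketch: a published Apex REST boundary point that an
// on-premise system calls instead of writing to Salesforce objects
// directly. Object and field choices are illustrative only.
@RestResource(urlMapping='/accountsync/*')
global with sharing class AccountSyncService {

    // External callers see one stable contract; the mapping to internal
    // objects and fields can evolve behind this boundary.
    @HttpPost
    global static String updateRating(String accountNumber, String rating) {
        List<Account> accounts = [SELECT Id, Rating FROM Account
                                  WHERE AccountNumber = :accountNumber LIMIT 1];
        if (accounts.isEmpty()) {
            return 'NOT_FOUND';
        }
        accounts[0].Rating = rating;
        update accounts;
        return 'OK';
    }
}
```

A design like this keeps the integration surface small and auditable: when the other system changes, you renegotiate one contract instead of untangling direct object-level dependencies.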

So what does this “look like”?  Here is a reference model that I use when trying to describe the organizational aspects of managing Salesforce.  (Disclaimer: this is only a reference model and each company is different.  What is important is to understand the principles of the reference model and then apply them to your own organization):

A Reference Model for Managing Salesforce in your Enterprise

Let’s look at the layers and describe some of the important aspects:

Center of Excellence

The Center of Excellence (CoE) means something different at almost every company you talk to, and reconciling those differences is well outside the scope of this article.  However, I think the key aspects of the CoE are to provide Enterprise-Wide Salesforce strategy, standards, and governance.  The CoE would determine which org a new project should be built in (or whether to pursue a multi-org strategy in the first place).  It would also build and manage an Enterprise-Wide roadmap of business and technical capabilities for Salesforce, which is why your CoE needs close alignment with your Enterprise Architecture team.  Finally, your CoE would provide Enterprise-Wide configuration and development standards that all Salesforce orgs and teams should follow.

Lines of Business

It is quite possible (and sometimes desirable) to have multiple Lines of Business (LOB) coexisting in the same org.  One of the key success factors for making this work is to build out a Delegated Administration function inside each LOB.  The goal is to provide immediate functional support to your business users without overloading your administration and technical teams.  When a business user is empowered to build reports, create list views, reset passwords, and even fix data, it creates a great relationship between the business and Salesforce IT groups.

Your LOBs will be making a number of requests that cannot be fulfilled by their Delegated Administrators.  Clearly defining these processes (New Business Case, Logging an Incident, Salesforce Change Request, etc) and how they pass into your technical teams is very important.

Help Desk

A mature organization will utilize the Help Desk wherever possible.  Initial Salesforce implementations often bypass this step, but as your organization grows into Salesforce it is important to provide a single focal point for Service Requests to your users.  The Help Desk is also a helpful part of this process when supporting complex solutions that may have many technology components such as middleware, CTI, SSO, etc.  I would recommend involving your Help Desk in your Salesforce Support processes as early as possible.

Salesforce.com Production Support

This is your core Administration or DevOps layer.  Your admins and environment managers live here: supporting users, making (pre-authorized) configuration changes, deploying new releases, triaging and researching production incidents, etc.  I recommend this group be the only one with true system administrator privileges, and that it exercise great discipline in all activities (such as logging every change with a case).  This group should be walled off from development teams and users, with only clearly defined entry points (service requests from users, deployment packages from developers, etc).

Salesforce.com Technical Oversight

This layer is responsible for understanding the detailed design of your Salesforce environment.  In a multi-org approach you may need one of these layers PER ORG.  The reason is that this group of people is responsible for commissioning and approving any changes that happen to the org.  I recommend at least three individuals in this layer: a technical architect who understands all of the code, integrations, and data conversion within the org; a functional architect (“app lead” or “solution architect”) who understands all of the business processes of the org inside and out; and a data architect who understands each object and field inside of the org.

This technical oversight team should be very hands-on and would be responsible for approving any releases to production.  They should be completing design reviews throughout the development process.  They should also be validating that all changes conform to your Salesforce standards.

Salesforce.com Technical Teams

It is possible to have multiple technical teams working within the same org at the same time.  You may have one team working on a specific LOB while another works a generic backlog for all others.  Or perhaps you have one team working under a quarterly release cycle while another works on a weekly sprint. You may have a small internal development team but outsource large projects to a vendor.  Whatever your use case, if you take the time and discipline to set up your teams and processes correctly you can achieve this Salesforce.com nirvana as well.  Once you are mature enough with your management practices you can centrally commission development efforts to discrete teams, who can build and package your projects following your Salesforce.com standards and deployment methodology.  This is when you can really start to scale your usage of the Salesforce1 platform in your enterprise.

Support

Another key to success in managing the complexity of your Salesforce environments is rigidly following a tiered support plan.  Most support issues should be handled as close to the user as possible, with only the most severe and significant issues ever being escalated to your technical teams.  Take the time to define your tiers and the escalation points.  If you need an example, here is one that may work in your company:

  • Tier 0 – Customer Self-Service Sites, Knowledge Documentation, Micro-Training Solution, Delegated Administration
  • Tier 1 – Your formal IT Help Desk where issues are logged, triaged, and escalated based upon formal Service Level Agreements.  User access issues can even be deflected here with IVR and/or SSO solutions.
  • Tier 2 – This is where your core administration team would really get involved.  These should be true system issues or service requests that cannot be fulfilled by your delegated admins or help desk.
  • Tier 3 – Only once issues are formally researched and escalated by the production support team should a technical team get involved.  This usually takes the shape of the original development team (if the project is still under warranty), a managed services team, a dedicated technical support team, or, if all else fails, your Salesforce.com Technical Oversight Team.

Understanding the Brick Wall

One of the most important aspects of the reference model is the brick wall.  The brick wall signifies the formality of migrating change into the production environment.  It is your organization’s primary control point to ensure that only quality components migrate to production, and then only on a predictable cadence.  If you have read my article on Deployment Strategy, you will understand the discipline necessary to migrate successfully and ensure your production environment (and all your sandboxes) are functioning as desired.

Summary

In many ways managing Salesforce is no different than managing traditional technology platforms.  It requires good strategy, architecture, development, deployment, and support processes.  To me the difference is that traditional technology platforms usually CANNOT survive without those processes, while Salesforce projects can often survive without them due to the platform’s robust design.  However, it is impossible to scale the use of Salesforce in your company without taking the time and discipline to manage the platform as an Enterprise asset.  With the right combination of organizational design, ITSM processes, and architectural governance your Salesforce.com implementations can truly help transform your Enterprise.

Salesforce1 Enterprise Deployment Strategy

This article was originally published for DeveloperForce on December 16, 2014.  See the following link: http://sforce.co/1DGsVPX

In my experience with clients (both big and small), Salesforce1 deployments have a bad reputation. However, in almost every instance the deployment issues can be identified and their root causes can be mitigated. This article will help you to plan, rehearse, and execute on your next flawless Salesforce1 deployment.

In my recent article on Environment Management I explained some of the intricacies related to managing the Salesforce1 platform metadata. However, regardless of the number of environments you maintain, there are some critical techniques necessary to manage a Salesforce1 deployment.

For the purposes of this article, “deployment” will be defined as the steps necessary to complete a smooth roll-out of new functionality into your production org.  It does NOT include user-adoption issues like training and support (which are just as important, if not MORE so!). Let’s look at a successful migration and the people, tools, and processes necessary to complete a “flawless” enterprise deployment.

People

The following roles are necessary to execute a smooth deployment.  Understand these are “roles” and not “resources,” and that some people may wear multiple hats.  In large projects there may be multiple people in these roles.  In smaller projects there may be one person playing ALL of these roles:

  1. Environment Manager – I strongly recommend a dedicated technical administrator whose responsibility is “Environment Integrity.”  I have heard many different names for this role (build master, deployment manager, configuration management engineer, etc.). Regardless of the name, there is a very important concept: developers SHOULD NOT have the privileges or responsibilities to deploy their own code into Production! Call me old-fashioned, but my urgency around this principle is not only about compliance requirements; it is also about taking the time to PLAN, REHEARSE, and EXECUTE your deployments.  When Salesforce developers deploy their own code it often leads to cowboy coding practices, which is exactly the thing we are trying to avoid.  The environment manager will actually execute the steps outlined in the deployment plan, including performing manual configuration steps and utilizing the source control and deployment tools.
  2. Release Manager – The release manager is the functional counterpart to the environment manager.  The release manager owns the release calendar and the necessary flow of information between the technical teams and the users (the Communication Plan).  The release manager is also responsible for building and distributing a thorough set of release notes.
  3. Development Team – As features are designed and built in the development environments, developers are responsible for documenting their deployment steps AS THEY BUILD.  Successful deployments start during development.  If you wait to define your deployment steps until testing is complete then you are most likely going to have a very painful migration.
  4. Business Stakeholder(s) – Business stakeholders need to have visibility and authority in the release process.
  5. IT Stakeholder(s) – Internal IT stakeholders such as technical architects, project managers, and the change management team need to be fully aware of any Salesforce deployments.

Tools

The tools listed below range from sophisticated software to simple documents; however, all are vital components of your deployment strategy.

  1. Release Calendar – This document maintains a record of all upcoming changes to your Salesforce environment.  It should include details on sandbox activity (planned refreshes, features moving to test, etc) as well as Salesforce release details (Pre-Release Sandbox Upgrades, Production Updates, etc).  It should be used to communicate with stakeholders as well as to manage any necessary deployment moratoriums.  The Release Manager should own and maintain this document.
  2. Configuration Workbook – Complex Salesforce environments need living design documentation.  I call this the configuration workbook.  This can be a wiki, a shared spreadsheet, or a complex configuration management tool.  Regardless of your method, developers, architects, admins, and environment managers should have a place where they can maintain details about each component of the Salesforce environment.  The configuration workbook can be used to track the lifecycle of components (proposed, designing, testing, released, retired) and should be maintained throughout the development and deployment activities.
  3. Deployment Package – A deployment package contains all of the components necessary to execute a migration to a new environment.  It typically consists of multiple parts including the deployment manifest, deployment plan, and release notes.
  4. Deployment Manifest – A deployment manifest is a catalog (list) of all of the components to be migrated through the API.  A change set would also be considered a deployment manifest.  If you are using the ANT Migration Toolkit, the manifest is your package.xml file (see the example after this list).  It can also be a simple spreadsheet.
  5. Deployment Plan – The deployment plan should list the deployment manifest(s) necessary for migration, as well as any and all manual steps.  The deployment plan should be constructed by the development team and executed by your Environment Manager.
  6. Release Notes – A successful deployment should include a functional description of the new and changed functionality.  These release notes should be built during the development process and distributed as part of the release by the Release Manager.
  7. Communication Plan – Another key element of a deployment is an effective communication plan.  This may come via email, blog, or even Chatter.  Business and IT stakeholders must have continual transparency to release timelines in order to plan accordingly.  The release manager should be responsible for the communication plan.
  8. Source Control Tool – Source control should play an active role in your deployment process.  At a minimum, code should be versioned in the source control repository.  Sophisticated teams may use branching, merging, and continuous integration techniques to manage their code and configuration as well.  You can even include most of the documents in this list in your source control repository for versioning.
  9. Deployment Tool – Teams should decide on a specific tool (or tools) for deployment.  The obvious options are Salesforce Change Sets, Eclipse IDE, or the ANT Migration tool.  However there are numerous third party tools that can aid in this process.  Whatever the tool, ensure your team’s consistent and disciplined use of the same tool throughout all of your deployments.
  10. Data Loading Tool – Data loading is a critical aspect of deployments.  Tools for loading data can also be used to automate some processes or manual changes as well (e.g., user/profile changes).
  11. Metadata Comparison Tool – Having a good tool to analyze differences between orgs can be very helpful.  Verify and understand any differences between orgs as you migrate between them.
  12. Web Browser Testing Tool – Salesforce deployments typically cannot be completed 100% through the Metadata API or change sets.  However, you can record and replay your manual deployment steps with a tool like Selenium or QTP to achieve 100% automation.
  13. Change Window – This is not a tool but more of a concept. The change window is a critical aspect of your deployment strategy to ensure predictability and trust.  Deployments to production should only occur during an approved change window that both business and IT stakeholders have agreed upon.
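
For illustration, here is what a minimal deployment manifest (package.xml) for the ANT Migration Toolkit might look like.  The member names are hypothetical and the API version is just an example from that era; your manifest would list every component in the release.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <!-- Apex classes included in this release (hypothetical names) -->
        <members>AccountSyncService</members>
        <members>AccountSyncServiceTest</members>
        <name>ApexClass</name>
    </types>
    <types>
        <!-- Declarative components travel in the same manifest -->
        <members>Account.Region__c</members>
        <name>CustomField</name>
    </types>
    <version>32.0</version>
</Package>
```

Whether you keep it in XML or a spreadsheet, the discipline is the same: if a component is not in the manifest (or in the deployment plan as a manual step), it does not go to production.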

Processes

A flawless deployment depends on the correct Planning, Rehearsal, and Execution of the following steps (at a minimum!):

  1. Plan

    1. Add the release date(s) to the release calendar using your specific project methodology and business requirements.  Validate your targeted release dates against internal moratoriums and Salesforce’s own release calendar.  (I would recommend NOT planning a large release around Salesforce’s own release dates.  Give things a couple of weeks to stabilize.)
    2. Choose a deployment tool. (My preference is the ANT Migration Tool but there are many options here.) The important thing about your deployment tool is to be consistent in its use as you migrate your features through each environment.
    3. Plan to create your deployment package AS you develop.  Changes that can be automated through the API should be listed in the deployment manifest.  Changes that cannot be automated should be documented with detailed instructions in your deployment plan.
    4. Agree to formal quality criteria for your release.  For example, agree that ZERO CRITICAL DEFECTS will be allowed in order to release, that High Priority defects are allowed in the release only with business stakeholder approval, and perhaps that Low Priority defects are allowed below an agreed threshold (say, 15%).  Whatever the criteria are, formally document and agree upon them with your stakeholders.  On large projects these criteria are vital to maintaining a viable release date.
    5. Decide on your environment migration path.  You can see my earlier post <here> on a recommended environment plan.  Regardless of your path, determine which environment will be your “from” environment (for example, a System Test full copy sandbox or a Staging Developer Pro sandbox).
    6. Determine your Formal Go/No-Go governance methodology.  Who needs to approve the deployment from the business side?  Who from IT?  What formal processes must be followed (Architecture Review Board, ITIL change management process, etc) in your organization?
    7. Determine your communication plan. Who from the business needs to be informed of the upcoming release?  How far in advance?  Who in IT needs to be informed and on what recurrence?  What about your users?  How will success (or failure) be communicated?  Plan this in advance to ensure appropriate visibility of the Salesforce changes.
    8. If you are working with Salesforce’s Customers For Life program, communicate your release plans with your CFL team.  They can provide timely notifications both to you as well as to Salesforce internally.
  2. Rehearse (Rehearsal for production deployment begins as soon as development and configuration start on the release):

    1. Add any necessary information to the deployment plan and/or deployment manifest AS EACH FEATURE IS DEVELOPED.  It is imperative that production deployment is considered as you build the feature.  For example: if the feature requires an apex trigger, add it to the deployment manifest; if the feature requires manual changes to the security model, add those instructions to the deployment plan.
    2. Create the appropriate release notes that can be compiled by the release manager AS EACH FEATURE IS DEVELOPED.
    3. Add relevant information to your design docs and configuration workbook AS EACH FEATURE IS DEVELOPED.  This documentation is vital to maintain complex environments with numerous parallel projects.
    4. Version both your code and your configuration using source control tools.  Utilize branching and merging as necessary based upon your team specific workflow.  Have developers document the feature number when they check-in their code.
    5. Before migrating features out of each development environment, your developers should have built their own feature-specific deployment packages.  Their code should be listed in a feature-specific deployment manifest.  Their manual instructions should be listed in a feature-specific deployment plan.  Their feature-specific Release notes should ALREADY be written.
    6. As the feature is migrated into a test environment take note of any and all issues during the deployment.  Adjust your feature-specific deployment manifest and deployment plans accordingly.
    7. Apex tests should pass in ALL environments.  Use continuous integration or schedule all tests to execute on a nightly basis in each environment (one scheduling sketch appears after this list).  Waiting to deal with automated testing and code coverage until near the actual production migration is a recipe for disaster.
    8. As the features migrate “upstream” through your test environments toward your production org, the feature-specific deployment plans and deployment manifests should be combined by the Environment Manager into the release-specific deployment package.  The environment manager should be responsible for migrating the entire deployment package through your test and staging environments.
    9. As the features migrate “upstream” through the test environments towards your production org, the feature-specific release notes should be consolidated by the release manager.
    10. If you would like to automate the ENTIRE release, including manual steps, create a deployment script (or set of scripts) using a web browser testing tool like Selenium.
    11. Complete a mock-deployment into your staging environment.  This should be repeated as many times as necessary to fully ensure the deployment plan and deployment manifest are comprehensive.  This staging environment should be an EXACT replica of your production org.  Refresh a new sandbox if necessary to ensure the environments are identical.
    12. Validate your final mock-deployment is successful via smoke-testing and simulated transactions.
  3. Execute

    1. Validate the necessary approvals are obtained by both the business and IT stakeholders as per your Go/No Go Governance methodology.
    2. Start your change window, including notifying any appropriate parties.
    3. Freeze all changes to production.  If necessary, lock out any delegated admins to ensure your org metadata will NOT change without your knowledge.
    4. Lock users out of the system.  This can be done by changing their profiles or temporarily freezing them.  Use data loader tools or Apex scripts to automate this process; a hedged Apex sketch appears after this list.  (Just don’t lock yourself out!)
    5. Validate your deployment package in production, including Running All Tests.
    6. Back up Salesforce metadata.  This can be done by creating a new sandbox (or refreshing an old one).  It can also be done using the Metadata API and a source control tool.  Regardless of your approach, this provides you with a safety net to fall back on in case of issues.
    7. Back up Salesforce data.  Depending on the scope and impact of the release it may be appropriate to back up all data.  This can be done on a scheduled basis by Salesforce (once per week) or via a nightly batch process completed by your own data loading tool. Depending on how long this process takes, it might be impractical to do this during your change window.  However, try to have a backup completed as close to your change window as possible.
    8. Complete any necessary “pre-migration tasks” outlined in your deployment plan.  These would be any manual steps necessary to “receive” your deployment package.  (These could even be automated with a web-browser testing tool like Selenium.)
    9. Deploy your deployment package to production using your deployment tool of choice.
    10. Complete any necessary “post-migration tasks” outlined in your deployment plan.  These would be any manual steps necessary to provision the release that are NOT supported by the metadata API.  (These could even be automated with a web-browser testing tool like Selenium.)
    11. Repeat steps 8-10 as many times as necessary to deploy your entire release as per your deployment plan.
    12. Complete a metadata comparison between your “from” environment and your production org.  Make sure any differences are clearly understood or addressed.  You can use a tool like Beyond Compare, WinDiff, or even custom Perl/Python scripts.
    13. Load Data.  Once the metadata is stable, any necessary data conversions or integrations can be activated.  Closely monitor the initial transactions to ensure system integrity.
    14. Smoke test the functionality in production.  Regardless of the technical discipline used on the deployment, manually validate the deployment was successful (so your users won’t be the first ones to do so!)
    15. Re-extract all Metadata and tag your Release in source control.  A release becomes an important milestone to mark your code should you ever need to fall back to an old version.
    16. Unlock users, allowing them back into the system (automate this with data loader tools or Apex scripts).
    17. Close your change window, including notifying any appropriate parties.
    18. Send a notification of the release to the appropriate users and stakeholders as per your communication plan.
    19. Distribute your release notes to any appropriate users.
    20. Update your configuration workbook documentation to indicate the new features are “deployed”.
    21. Refresh any necessary sandboxes and/or make your deployed package available to your sandboxes.
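
As an illustration of the nightly test execution mentioned in Rehearse step 7, here is one hedged sketch of scheduling all Apex tests using the ApexTestQueueItem object.  It assumes your test classes follow a “*Test” naming convention; adjust the query to your own standards.

```apex
// Sketch: enqueue all test classes for execution on a nightly schedule.
// Assumes test classes follow a "*Test" naming convention.
global class NightlyTestRunner implements Schedulable {

    global void execute(SchedulableContext ctx) {
        List<ApexTestQueueItem> queue = new List<ApexTestQueueItem>();
        for (ApexClass testClass : [SELECT Id FROM ApexClass
                                    WHERE Name LIKE '%Test']) {
            queue.add(new ApexTestQueueItem(ApexClassId = testClass.Id));
        }
        insert queue;  // Results accumulate in ApexTestResult for review
    }
}

// One-time setup, run as anonymous Apex: schedule for 2 AM daily.
// System.schedule('Nightly Apex Tests', '0 0 2 * * ?', new NightlyTestRunner());
```

Run this in each environment (or wire the same idea into your continuous integration server) so failing tests surface long before the production migration.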
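
And for Execute step 4, a hedged sketch of locking users out with anonymous Apex by toggling UserLogin.IsFrozen (the “freeze” approach mentioned above; profile swaps work too).  The deployment-team username is hypothetical, and very large orgs may need to batch the updates.

```apex
// Anonymous Apex sketch: freeze logins for everyone except the
// deployment team before a release. The username below is hypothetical.
Set<String> deploymentTeam = new Set<String>{ 'envmanager@example.com' };

Map<Id, User> toFreeze = new Map<Id, User>([
    SELECT Id FROM User
    WHERE IsActive = true AND Username NOT IN :deploymentTeam
]);

List<UserLogin> logins = [
    SELECT Id, IsFrozen FROM UserLogin
    WHERE UserId IN :toFreeze.keySet() AND IsFrozen = false
];
for (UserLogin login : logins) {
    login.IsFrozen = true;  // equivalent to clicking "Freeze" on the user
}
update logins;  // large orgs may need to process this in batches
// After the release, run the same script setting IsFrozen = false.
```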

As you can see, there are a lot of people, tools, and processes necessary to support a smooth roll-out.  Plan accordingly, rehearse your plan, and follow these criteria in order to ensure your enterprise deployments become predictable and flawless.  (And if the above list is not enough to keep you busy, make sure you have a good plan for training and support.  Both of these functions are just as critical to a good deployment and user adoption.)

For more help on executing flawless deployments, consider bringing on a Salesforce.com Certified Technical Architect (CTA).  CTAs have the technical platform knowledge combined with Application Lifecycle Management experience to help your organization get started.

Salesforce1 Enterprise Environment Management

This article was originally published for DeveloperForce on December 9, 2014.  See the following link: http://sforce.co/1wXpHUD

Establishing an effective Environment Management strategy is critical for utilizing the Salesforce1 platform.  Salesforce has already established itself as a leader in cloud technology and innovation; however, some aspects of dealing with the platform still require good old-fashioned IT management skills.  Salesforce has a LOT of content around governance, environment management, and change management.  However, I have noticed at multiple clients that the information and options are sometimes TOO plentiful.  Many customers want to be given a recipe and a framework as opposed to designing their own solutions from best practices.  To that effect, this article describes a typical environment management strategy that can be emulated on many Enterprise projects.

What do I mean by Enterprise Projects?  This article applies to your organization if you meet the following criteria:

  1. Your Salesforce landscape is made up of one or more orgs consisting of “high complexity” (lots of configuration, lots of code, lots of data).
  2. Your Salesforce projects are required to follow a formal change process under the governance of your Enterprise Architecture and/or ITSM frameworks.
  3. Your Salesforce projects consist of many resources including architects, business analysts, configurators, developers, and testers.
  4. You have a large number of requirements, often prioritized by competing business stakeholders.
  5. You have active system administrators who are making authorized (and sometimes unauthorized) changes directly in production.
  6. You constantly have issues deploying changes to production.

If this sounds familiar then you are in luck: this article was written for you.  If your projects do not sound like the above list then many of the recommendations in this article may not apply to your environment (but feel free to keep reading!)

The first thing to understand about environment management on Salesforce1 is how it differs from traditional environment management.

Your Salesforce Production Org is Your Only “Pristine” Environment

In traditional software development your pristine environment can be housed within source control and configuration management tools. In order to maintain a stable production deployment, the emphasis is on using configuration management techniques to maintain your code, configuration files, and deployment scripts.  Production can be rebuilt and deployed at will, and rolling back to a previous version of the code is possible.  However, in Salesforce it is not possible to take production offline or to deploy the entire application in a big-bang deployment event.  Instead you are trying to migrate “differences” between your environments into production.  You can utilize source control and configuration management tools with Salesforce; however, your production environment is (almost) always live regardless of the state of your configuration management procedures.

Salesforce is a heavily configured environment – much of which can be done directly in production

Salesforce supports a much higher volume of “production changes” due to its metadata-based configuration design.  Therefore many changes can be made safely in production without the need for a code deployment.  While this has the effect of producing immediate value to the business (Admin Heroes, anyone?), it can have dreadful consequences on future deployments if the right controls are not in place.

The larger the difference between environments, the more difficult the migration between them

As I already described, it is not possible to move the entire code base from one state to another (e.g., Dev –> QA –> Prod).  The nature of Salesforce migrations means that the emphasis should not be on your configuration management artifacts, but rather on environment synchronization.  You may have the best code and the most thorough migration plan, but if your environments are out of sync with production changes you will have a VERY difficult time deploying changes of significant complexity.

Your sub-prod environments differ from your Production environment

This is NOT uncommon on traditional projects.  However, there are specific pain points around Salesforce sandboxes.  The most noticeable difference will be the database size.  Many customers do not even have a full copy sandbox, let alone multiple.  Most sandboxes will not come loaded with data, and those that do may need changes to support testing and integration.  There are other differences outside the scope of this article; however, the main pain point remains: there will be overhead in each sub-prod environment to manage data and metadata.  This overhead increases as the number of sandboxes increases.

Not all changes to Salesforce can be automated via the API

Salesforce is releasing significant new features three times per year.  While many of these new features can be managed and maintained using the APIs, some changes are not possible except through the web-browser interface.  That means that maintaining a strict documentation set is VITAL for successful migration and environment maintenance.

With those prerequisites out of the way, it is time to introduce my reference model for Enterprise Environment Management:

Environment Management

Let’s walk through the diagram and explain the concepts and purpose of each component, including the types of change to each environment.

#1 – Production

As mentioned, this is your only pristine environment.  Refreshes are only available from production into the sandboxes (reflected by the red lines).  Therefore your code and configuration changes are “swimming upstream” – trying to migrate successfully to the production environment from the lower sub-prod environments.

  • API Based Deployments – Your production deployments should take place mostly through a strictly controlled process via ANT or Change sets.  The only environments I would allow to migrate into Production would be the System Test (#6), Stage (#7), or PFix (#9) environments.
  • Manual Changes (Configuration and Data) – Any configuration changes made to production must be applied to sub-prod environments either manually or via a refresh.  Subsets of production data must also be pushed into multiple sandboxes to support testing, training, and integration.
  • Refreshes – You can and should refresh sub-prod environments often.  If you do NOT refresh your sub-prod environments you have 2 choices: 1) manually maintain ALL changes from Production to each “managed” sub-prod environment or 2) risk very difficult and unpredictable deployments.

#2 – Un-managed Developer Environments

Each developer on the project will need an environment.  Typically these environments are Developer Sandboxes.  The care and feeding of this environment is usually done by the developers themselves.  These environments typically exist only for the length of time it takes to migrate a feature to production, and sometimes even less.  I call these “unmanaged” environments because typically they will not be maintained by your environment manager.  A developer should complete unit testing and Apex-based automated testing in this environment (a minimal test class sketch follows the bullets below).

  • Refreshes – I typically expect these sandboxes to be refreshed after a working feature has been successfully migrated upstream.
  • Manual/API Maintenance – Changes from other developers (and eventually other projects) must be successfully applied to each developer sandbox.  This “merging” of configuration and code can be done multiple ways including source code tools, change sets/metadata API, or manually.  This step is very important depending on the number of developers and the number of parallel projects inside a single org.
  • API Based Deployments – Working features should be deployed upstream using Change Sets or ANT.  Get in the habit of maintaining a strict manifest (list, index, catalog, etc) of components that need to be migrated.
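
As a reminder of what belongs in these environments, here is a minimal, self-contained Apex test of the kind a developer should be running before migrating a feature upstream.  The assertion is purely illustrative; real tests should exercise your feature’s triggers, classes, and bulk behavior.

```apex
// Minimal illustration of developer-owned automated testing.
// A real test suite should cover the feature's actual logic and
// bulk behavior, not just record creation.
@isTest
private class AccountCreationTest {

    static testMethod void testAccountInsert() {
        Account acct = new Account(Name = 'Unit Test Account');

        Test.startTest();
        insert acct;
        Test.stopTest();

        // Verify the record was created as expected
        Account result = [SELECT Id, Name FROM Account WHERE Id = :acct.Id];
        System.assertEquals('Unit Test Account', result.Name);
    }
}
```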

#3 – Managed Project Environments

Each project needs to have an environment that can isolate changes from other projects.  Many projects may be occurring at the same time and on different release schedules.  These managed environments are the holding grounds for project features that are not ready for release.  This is where I would have testers complete functional testing.  I call these “managed” environments because typically they are (or should be) maintained by an environment manager (a system administrator with responsibility for environment integrity).

  • Refreshes – Typically these sandboxes would only be refreshed after a project has migrated upstream.  Sometimes it is necessary to refresh a sandbox prior to this upstream migration, in which case all project changes would have to be reapplied.  Source control tools are great for this; however expect manual maintenance as well.
  • Manual Changes (Configuration and Data) – Any configuration changes made to production must be applied to sub-prod environments either manually or via a refresh.  Subsets of data must also be pushed into sandboxes to support testing and integration.
  • Manual/API Maintenance – Changes from other projects must be successfully applied to each project sandbox.  This “merging” of configuration and code can be done multiple ways including source code tools, change sets/metadata API, or manually.  This step is very important depending on the number of parallel projects inside a single org, and it becomes more or less so based upon the release calendar of each project and whether the project sandbox can afford to be refreshed mid-stream.
  • API Based Deployments – When a project is ready to move into production it should be migrated upstream into the “release train”.  Changes should be migrated using Change Sets or ANT.  Get in the habit of maintaining a strict manifest (list, index, catalog, etc) of components that need to be migrated.

Note: I have included Citizen Developer Environments in this region.  Citizen Developer projects are small efforts that do not require multiple developers/sandboxes.  However, it is important to have the Citizen Developers migrate into production using the same path as your enterprise projects (i.e. the release train).

#4 – Managed Release Train

I use the concept of a release train to describe the activities necessary to migrate successfully to production.  A release train can be a simple schedule detailing when changes can be made to each environment, or it can be a complex automated solution.  You can read more about release trains here.  For the purpose of this article I will use a very simple example of a release train:

  • Only changes bound for production should be on the train.  Therefore make sure your unit and functional testing in the lower environments is very thorough.
  • Only on a predefined basis is movement allowed from one “car” to the next.  For example, a weekly cycle (i.e. Tuesday nights, etc) would be the change window in which migration could occur on the train.  The cadence would be set based upon your organization’s capacity to support deployment activities.  It is important to set a precedent on release train migrations of “often enough but not too often” as well as “highly disciplined and predictable”.
  • The number of environments inside your release train will be determined by the types of gates necessary to support your deployment process.  I have recommended three stops (integration, test, stage) but this may be more or fewer depending on your specific organization.

Let’s look deeper at each stop on the release train:

#5 – System Integration

This is the environment where parallel projects come together.  This is also a good spot for system integration testing.  Automated Apex tests should be run continually in this environment to ensure a project has not broken something.  Consider the use of more sophisticated automation tools to ensure your business processes are protected.  Selenium is a good open source tool for testing the user interface.  You can also test APIs and integrations with tools like SoapUI or Postman.

  • Refreshes – I would refresh the system integration sandbox as often as is practical.  Therefore I would NOT recommend a full sandbox with its accompanying 29-day refresh limit.
  • Manual Changes (Configuration and Data) – If you will not refresh your sandbox then expect a much higher level of manual maintenance.  All production changes must be pushed into this environment one way or another.
  • Manual/API Maintenance – If necessary, code can be pushed from System Integration “back down” into lower sub-prod environments.  This would allow parallel projects to integrate sooner.  This is a great technique if you know there will be conflict between two projects (e.g., both working on the same trigger).
  • API Based Deployments – When code has successfully passed the necessary gates it can be migrated to the next environment on the release train.  Use change sets or ANT as well as detailed documentation.

#6 – System Test

This environment should be reflective of your desired production state. This is where I would recommend acceptance testing.  You should be able to conduct end-to-end business processes in this environment.  I typically would use this environment for my Full Copy sandbox as it will change less often than other environments.  Load testing and performance testing can also be validated in this environment due to the larger database size.

  • Refreshes – I would refresh the system test sandbox as often as is practical.  Ideally you would refresh this every 29 days.
  • Manual Changes (Configuration and Data) – If you will not refresh your sandbox then expect a much higher level of manual maintenance.  All production changes must be pushed into this environment one way or another.  Data that comes into the full copy sandbox may need to be changed to support testing, integration, or even privacy requirements.  Tooling and automated scripts come in handy here.
  • API Based Deployments – When code has successfully passed the necessary gates it is ready to move to production.  But not QUITE ready… you must first complete a staged deployment.

#7 – Stage

This environment’s sole purpose is to ensure your deployments to production will succeed.  I would definitely not use a full sandbox here.  In fact I would refresh this environment prior to EACH production deployment.  That way you know you have the most up to date configuration from production.  I hope you followed my advice and used change sets or ANT throughout the migrations.  That means you have a comprehensive deployment manifest.  I also hope you followed my advice and kept detailed documentation for all manual changes.  That means you have a comprehensive set of deployment instructions.  After deploying to stage you should conduct smoke testing to ensure your deployment was successful.

  • Refreshes – Refresh often.  Enough said.
  • Manual Changes (Configuration and Data) – Theoretically you would not need to maintain any manual changes to this environment as they would all be brought over via the refresh.  However some manual steps to provision the environment will still be necessary, especially data related to any smoke testing.
  • API Based Deployments – You can choose whether you want to deploy to Production from Stage or System Test.  However I am recommending only using Stage for practicing your deployment, in which case you could tear down the environment immediately upon validating your mock deployment.

If your deployment was successful then you have a very high assurance that your production deployment will also be successful.  Repeat the steps used in the stage deployment during the production deployment.

#8 – Training Environment(s)

Training is a difficult issue due to the nature of environment setup and tear-down.  It is also difficult to determine whether the training environment should contain the code that is ABOUT to go live or the code that HAS ALREADY gone live.  So you have a few choices.  You can decide on one, multiple, or all of the following:

  1. Migrate from System Test into a “Pre-Release Training Environment”.  Similar to a stage deployment, but this environment would be persisted and will need a much more thorough data set to support training.
  2. Refresh from Production into a “Post-Release Training Environment”.  This will ensure you have the latest and greatest metadata to support training.  You will still need to maintain data in this environment.
  3. Train directly in production.  Use mock data that is recognized (and therefore ignored) enterprise wide.  This sounds scary but I’ve seen it work very well.  It also lowers your environment management costs.
  4. Create training applications or videos.  You can create training videos or even interactive HTML5 applications that allow users to observe small business processes.  This can often be done much cheaper than maintaining live training environments.

#9 – Pfix (Production Fix)

No matter what you do, someday a Sev 1 bug will reveal itself.  So where should you make the fix, and how do you get it deployed to production as quickly as possible?  My recommendation in these situations is to generate a new sandbox dedicated solely to responding to the Sev 1 defect.  The configuration or code can be fixed, immediately tested, and immediately deployed to Production.  This should ONLY be done under dire circumstances, as you are bypassing many of the controls I have outlined in this article.  Just make sure to reapply the fix into your sub-prod environments as necessary.

Enough Already!

As you can see there is quite a bit of activity that needs to take place in order to orchestrate a pristine environment plan.  And this article only covers migration activities, not production support issues like data backup, archiving, etc.  I hope it is clear by now that I HIGHLY recommend hiring a full-time Salesforce.com Environment Manager to plan, execute, and monitor all of the items outlined above.  An Environment Manager would be similar to a System Administrator; however, their focus is very different.  A Salesforce system administrator is typically focused on users.  An Environment Manager is focused on the technical infrastructure.

When consulting with enterprise customers I often encounter issues like “how to maintain sandboxes” and “how to ensure smooth production deployments”.  And typically none of these customers have a dedicated Environment Manager who is following a robust strategy and executing these detailed tactics.  An investment in a dedicated Environment Manager will allow your company to scale Salesforce1 much more effectively.

Summary

Environment management is one of the most difficult and underappreciated aspects of Enterprise Salesforce projects.  Companies often underestimate the amount of work necessary to orchestrate the movement of data and metadata throughout the environments.  The right strategy, the correct resources (human and technical), and effective processes can radically accelerate and improve your consumption of the Salesforce1 platform.  You can harness the agility of Salesforce with the predictability of Enterprise-class deployments.  If you are looking for help getting started, consider obtaining the help of a Salesforce.com Certified Technical Architect to help your company define and execute upon an Environment Management strategy.

Integration Architecture for Salesforce.com

This article was originally published for DeveloperForce on November 26, 2014.  See the following link: http://sforce.co/1AWIC3h

As a Salesforce.com Architect it is your role to lead your company in the evolution of its Integration Architecture. A good architect must understand both integration architecture and integration patterns.  The difference between the two is analogous to designing the highway vs driving cars on the highway.  The Salesforce1 Platform offers architects and developers a wide array of integration technologies and recommended patterns (the cars); however, without the correct Integration Architecture and technology infrastructure (the highway) your projects and solutions will be at risk for performance, scalability, data integrity, and many other problems.  This article will introduce you to the components of an effective Integration Architecture and walk you through a reference design similar to many of my Enterprise clients’.  Hopefully this article will be used together with the official Salesforce Integration Patterns guide when architecting your Salesforce.com solutions.

What are the components of a good Salesforce.com Integration Architecture?

The Integration Architecture aligns the Business Strategy with Technical Capabilities

The best Salesforce Architectures are not based upon incumbent technology, singular architecture approaches, or corporate politics.  The best Salesforce Architectures are based upon DELIVERING BUSINESS VALUE.  What this means for the architect is to focus on the business’s requirements, roadmap, and needs, and to offer technical capabilities against them.  In other words, you need to see where the business wants to drive, and figure out which highways and roads are necessary to support the amount of traffic.  Idealistic architecture (for example, 100% Services Oriented Architecture) may cripple your ability to provide the capabilities needed by your business when they need them.

The Integration Architecture supports a mix of batch processing and real-time services middleware

Good Salesforce.com architects have learned that the best integration designs support both batch and service-based patterns.  This means you have multiple types of middleware at work.  I have had clients with 3-4 different integration platforms in their Salesforce.com architectural landscape.  This is because no one solution can ever effectively meet ALL your requirements, and once again idealistic architectures are not as important as supporting the business’s needs.

The Integration Architecture is based upon Business Service Level Agreements (SLAs)

A mature organization and architect will attempt to define SLAs for data and process integrations.  These SLAs have an important role on Salesforce.com projects as they may radically affect the chosen technology and integration pattern.  The SLAs should be based upon real business needs (sorry, not everything in life needs to be real time) that help define the non-functional requirements.  If you only need to drive a few miles you do not NEED a highway.  However, if you are going on a road trip I hope you aren’t taking side roads!  Define your solutions based upon your business’s service level requirements.

The Integration Architecture has a clearly defined standard for applying different Integration Use Cases

As your landscape evolves and your Salesforce.com expertise matures, the goal is to define a set of capabilities and standards for all Salesforce.com integrations at your company.  Each project should not have to define when and where to use which technologies, how and when to authenticate, etc.  These architecturally significant designs should be standardized for your enterprise.  This is where a Center of Excellence or Architecture Review Board comes into play.  Each project should be subservient to a higher integration architecture authority.

A Typical Enterprise Salesforce.com Integration Architecture

Let’s take a look at a reference Salesforce.com Integration Architecture.  This may or may not look like your existing landscape; however, this reference is based upon years of work at many Fortune 500 companies.  The reference design also does not recommend one technology vendor or solution over another; rather, the goal is to understand the technical capabilities that you can (and probably should) consider as your Salesforce.com landscape matures.

A Salesforce.com Reference Integration Architecture

Let’s take a look at the most common integration use-cases and how they apply to your Salesforce.com Integration Architecture.  The direction of the arrows in the reference model is not necessarily the way the data is moving, but rather the way the integration connection is being established.  This is a critical aspect of Integration Architecture as it pertains to your security and any real-time requirements.

Cloud-to-Ground (Salesforce.com Originated)

In Cloud-to-Ground use cases you are attempting to push a transaction (message or data) from Salesforce into your On-Premise infrastructure.

Capability #1 – The Salesforce.com originated message is relayed to a DMZ (demilitarized zone) service end-point.  This could be a firewall, a services gateway appliance, or a reverse proxy.  You must work closely with your security team to define this layer, as opening the corporate firewall to inbound web traffic is a high security risk.  This is where much (if not all) of your security authentication from Salesforce.com occurs.  Whitelisted IPs, two-way SSL, and basic HTTP authentication are some of the ways to authenticate Salesforce into the DMZ layer.  (A hedged Apex callout sketch follows.)
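
To ground Capability #1, here is a minimal sketch of a Salesforce-originated callout presenting a client certificate for two-way SSL.  The endpoint URL and certificate name are hypothetical; the certificate would be uploaded under Setup > Certificate and Key Management, and the endpoint registered as a Remote Site Setting.

```apex
// Sketch: Salesforce-originated (Cloud-to-Ground) callout to a DMZ
// service endpoint. The URL and certificate name are hypothetical;
// the endpoint must also be registered as a Remote Site Setting.
public class DmzRelayService {

    public static HttpResponse relayMessage(String payload) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://gateway.example.com/esb/inbound');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(payload);

        // Two-way SSL: present a client certificate uploaded via
        // Setup > Certificate and Key Management.
        req.setClientCertificateName('DMZ_Client_Cert');

        // Time out rather than hold the transaction open indefinitely.
        req.setTimeout(60000);

        return new Http().send(req);
    }
}
```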

Capability #2 – The message is relayed from the DMZ security zone into the trusted On-Premise infrastructure.  The message is usually destined for an Enterprise Service Bus (ESB) and durable message queue.  The ESB would also handle any transformation, mediation, and orchestration services required by the detailed integration requirements.

Capability #3 – Depending on your Enterprise Architecture, the ESB may be pushing the message into the SOA infrastructure.  These web services provide consumer-agnostic data and business process services to the Enterprise.  Salesforce.com can become a consumer (and later a producer) of these SOA services.  By re-using existing SOA web services you can save your project a lot of time and money as opposed to integrating directly into the source system.  If you do not have a SOA layer your project may be responsible for integrating directly into the legacy application.

Capability #4 – Another key capability for mature Salesforce.com Integration Architectures is some sort of On-Premise database access.  This may be a standalone database or part of a more formal Enterprise Data Warehouse (including an ODS – an operational data store).  Most commonly (but not always) in a Cloud-to-Ground scenario this transaction would be a database READ.  Salesforce.com can read data from the database in real (or near-real) time.

Ground-to-Cloud (On-Premise Originated)

In Ground-to-Cloud use cases you are attempting to push AND pull data from Salesforce from your On-Premise infrastructure.

Capability #5 – A mature Integration Architecture should handle all of the real-time calls into Salesforce from the ESB.  However, if you do NOT have an ESB, this step would occur from each separate application requiring access to Salesforce.  From a security standpoint it is much better to handle all of the calls to Salesforce from centralized integration middleware.  You can use OAuth or username/password session-based authentication to Salesforce.  The middleware may already have a session with Salesforce so that you don’t need to log in again for every transaction.

Capability #6 – Many integrations can be accomplished in a batch design.  This is often the cheapest and fastest way to get data in and out of Salesforce.com.  I would argue a robust ETL solution is necessary for all Salesforce environments.  (This may be as simple as Salesforce’s Data Loader Command Line Interface.)  The role of the ETL is to move large data volumes using the Bulk API where possible.

Capability #7 – As a Salesforce.com architect you have a responsibility to your company or client to off-load your Salesforce data into a replicated copy.  My argument for this is that Salesforce’s database is not likely to have outages or lose data; however, you and your team are VERY likely to break your own data via user error, bad code, or runaway processes.  By replicating your data offline you now have the power to restore data to an earlier state without engaging Salesforce (who may or may not be able to restore it exactly as necessary).

Capability #8 – The ETL is also responsible for moving data in and out of your database infrastructure.  Often data from the EDW needs to be staged in Salesforce (Accounts, for example).  Also, pulling data down from Salesforce into your EDW may be much easier when done using batch processing patterns.

Cloud-to-Cloud

Capability #9 – If you have multiple orgs (see my article on multi-org strategy) you will often have the need to integrate between the orgs.  Salesforce makes this (sometimes too) easy via Salesforce2Salesforce.  You can also directly contact another org via RESTful web services integration.  Salesforce.com’s road-map includes the ability to consume other orgs’ data via OData, which may also be a good way of providing read-only access across your org landscape.
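As a sketch of the RESTful option, the snippet below queries another org’s REST API from Apex.  The access token would come from an OAuth flow against the other org, and the other org’s base URL must be registered as a Remote Site Setting; both are assumptions here, not a prescribed design.

    // Sketch of a direct org-to-org REST query (Apex).
    public class CrossOrgReader {
        public static String queryAccounts(String otherOrgBaseUrl, String accessToken) {
            String soql = EncodingUtil.urlEncode(
                'SELECT Id, Name FROM Account LIMIT 10', 'UTF-8');
            HttpRequest req = new HttpRequest();
            req.setEndpoint(otherOrgBaseUrl + '/services/data/v32.0/query?q=' + soql);
            req.setMethod('GET');
            // Bearer token obtained via OAuth against the other org
            req.setHeader('Authorization', 'Bearer ' + accessToken);
            HttpResponse res = new Http().send(req);
            return res.getBody(); // JSON result set from the other org
        }
    }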

Capability #10 – Salesforce’s robust integration technology makes it very easy to integrate point-to-point with other systems.  While this is appropriate for some solutions (Google Maps mashups, etc), I recommend staying away from this design in large enterprises.  The more that Salesforce is made the hub of integration activity, the more time you will spend building, maintaining, and troubleshooting integrations as opposed to building new business value.  This is a trap I have seen many companies fall into.

Capability #11 – Rather than using Salesforce.com as your hub of cloud-to-cloud integration activity, many companies have moved towards cloud-based Integration-as-a-Service packages.  While not true ESBs per se, many integration vendors have started providing cloud-based solutions for managing your cloud-to-cloud use cases.  Because these solutions are specifically tailored for Salesforce.com (and other popular SaaS vendors), the time to build and deploy an integration can be radically reduced compared to using an on-premise ESB.

Capability #12 – These cloud service buses can handle service mediation, transformation, routing, error handling, etc., for your other cloud-based end-points.  Building durable and resilient integration solutions inside of Salesforce can be expensive and very complicated.  Middleware should be used where and when possible.

Capability #13 – Some companies prefer to broker all integrations through their ESB, including Cloud-to-Cloud use cases.  My warning here is this: the cost of highly resilient ESBs can be EXTREMELY high.  If the service levels between Salesforce.com and Workday, for example, must go through your on-premise technology, you may be shooting yourself in the foot.  Now your “Cloud” solution is piggy-backing on the same technical infrastructure, cost, service levels, and release timeline as your On-Premise solutions.  Tread lightly and make sure to design your Integration Architecture first and foremost around delivering BUSINESS VALUE.

In Summary

I was previously an Enterprise Architect working with Service Oriented Architectures before becoming a Salesforce.com Certified Technical Architect.  When I was first introduced to Salesforce I was shocked to see either a 100% dependency on batch integration technology or a 100% reluctance to use anything but real-time services design.  However, one of the reasons I enjoy what I do so much is that I have learned that there is NO GLASS SLIPPER in Salesforce Integration Architecture.  One size does not fit all and no one solution can be the best for all of your requirements.  It is your responsibility as the architect to analyze, recommend, and implement a variety of integration capabilities that will enable your team, clients, and company to realize the powerful transformation of moving to the Salesforce1 platform.

Designing Enterprise Data Architecture on Salesforce.com

This article was originally published for DeveloperForce on November 5, 2014.  See the following link: http://sforce.co/1tAKBEQ

 

Designing a good data architecture (DA) on Salesforce1 can often be the difference between a great success story and an epic failure.  The DA of Salesforce affects almost ALL areas of your org – and therefore is not to be taken lightly or rushed into quickly.  There are some key differences between Salesforce and other platforms that are critical to understand when designing your DA.  Unfortunately most implementations do not take an enterprise perspective when they are being designed.  This leads to significant refactoring as you increase your usage and knowledge of the platform.

First of all, it’s important to understand the differences between Salesforce and other database applications.

  1. Salesforce looks and feels like a traditional OLTP relational database.  However under the covers it has been architected very differently to support multi-tenancy, dynamic changes, and platform specific features.  Do NOT assume data models move seamlessly from the old world into the new.
  2. Your data is co-located alongside other tenants.  While this may cause security concerns, it will affect you more in terms of learning the scalability thresholds and governor limits that are placed upon the platform.
  3. Unlike traditional databases, Salesforce data cannot be dynamically joined through its query engine.  Rather, the “joins” are based on the predefined relationships between objects (see the SOQL sketch after this list).  Therefore the data model design is critical and understanding reporting requirements UP-FRONT is a key success factor.
  4. Salesforce is not a data warehouse (nor do they want to be).  The recommended data strategy is to have the data you need and to remove the data you don’t.  While that sounds like a pretty simple concept it is much more difficult to realize.
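To illustrate point 3, here is what the two flavors of relationship “joins” look like in SOQL.  Both traverse relationships that already exist in the data model – there is no ad-hoc join:

    // Child-to-parent: traverse the relationship field on Contact
    SELECT Id, Name, Account.Name, Account.Industry FROM Contact

    // Parent-to-child: nested query over the child relationship name
    SELECT Name, (SELECT LastName FROM Contacts) FROM Account

If a report needs data from two objects that share no relationship, no query will save you – the relationship has to be designed into the model up front.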

Let’s walk through the process of designing an Enterprise data architecture.  An effective DA design will go through most if not all of the following steps:

Step 1 – Define Your Logical Data Model (LDM)

A good DA starts with a good logical design.  This means you have taken the time to document the business’s description of its operations.  You have a catalog of business entities and relationships that are meaningful and critical to the business.  You should build your logical model with NO consideration for the underlying physical implementation.  The purpose is to define the LDM that will guide you through your data design process.  Make sure to take any industry-relevant standards (HL7, Party Model, etc) into consideration.

Step 2 – Define Your Enterprise Data Strategy (including Master Data Management)

Outside the scope of this post (but totally necessary on an Enterprise implementation) is defining your enterprise data strategy.  Salesforce should (theoretically) be a critical component of, but also subservient to, your Enterprise Data Strategy.  It will affect your Salesforce DA in some of the following ways:

  • Is there a Customer Master or Master Data Management system and if so what LDM entities are involved?
  • What are the data retention requirements?
  • How and when does the Enterprise Data Warehouse receive data?
  • Is there an operational data store available for pushing or pulling real-time data to Salesforce?

Step 3 – Document the Data Lifecycle of Each Entity in the LDM

Each entity within the LDM will have its own lifecycle.  It is critical to capture, document, and analyze each specific entity.  Doing so will help you know later how to consolidate (or not) entities into objects, how to build a tiering strategy, and even how to build a governance model.

  • Where is the source of truth for each entity?  Will Salesforce be the System of Record or a consumer of it?
  • How is data created, edited, and deleted?  Will Salesforce be the only place for these actions?  Will any of those actions happen outside Salesforce?
  • What are the types of metrics and reporting required for this entity?  Where do those metrics currently pull data from, and where will they in the future state?
  • Who “owns” the data from a business perspective?  Who can tell you if the data is right or wrong?  Who will steward the entity and ensure its quality?
  • What business processes are initiated by this entity?  Which are influenced?
  • Get some estimates on data sizing for master entities and transactions.  This will be very important when large data volumes (LDV) are involved.

Step 4 – Translate Entities and Cardinality into Objects and Relationships

It’s time to start translating your LDM into a Physical Data Model (PDM).  This is an art and not a science, and I definitely recommend working closely with someone very knowledgeable on the Salesforce platform.

  • Consolidate the Objects and Relationships where possible.  Assess where it makes sense to collapse the entities, especially based upon common relationships to other objects.
  • This is where record types become an important aspect of the Salesforce design.  A common object can be bifurcated using record types, page layouts, and conditional logic design.  A common architectural principle that I use is: “The More Generic You Can Make a Solution the More Flexible it Becomes”
  • The tradeoff to consolidating objects is to consider the LOBs that will be using the object and your (forthcoming) tiering strategy.  It may make sense to isolate an entity for technical, governance and/or change management reasons.
  • Another downside to consolidating objects is the added need to partition your customizations.  Be prepared to write different classes/web services/integrations at the logical entity level.  For example, if 6 entities are overriding the Account object you will need custom logic for Business Customers vs Facility Locations vs Business Partners, etc – all hitting the Account object under the covers.  (A trigger sketch follows this list.)
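As promised, here is a minimal sketch of that partitioning using a trigger that dispatches by record type.  The record type names and the field defaulting are hypothetical examples, not a recommended trigger framework.

    // Sketch: partition Account logic by record type (Apex).
    trigger AccountTrigger on Account (before insert, before update) {
        // Describe call avoids a SOQL query for record type lookups
        Map<Id, Schema.RecordTypeInfo> rtInfos =
            Schema.SObjectType.Account.getRecordTypeInfosById();
        for (Account acct : Trigger.new) {
            if (acct.RecordTypeId == null) continue;
            String rtName = rtInfos.get(acct.RecordTypeId).getName();
            if (rtName == 'Business Customer') {
                // Business Customer specific defaulting lives here
                if (acct.Rating == null) acct.Rating = 'Warm';
            } else if (rtName == 'Facility Location') {
                // Facility Location logic stays isolated from other LOBs
                if (acct.Description == null) acct.Description = 'Facility';
            }
        }
    }

In a real org each branch would delegate to its own handler class so that the LOB teams can own their logic independently.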

Step 5 – Determine whether to override Standard Objects

Another difficult decision is when to override a standard object vs building a custom object.  Once again this is more art than science, but there are some key considerations on this topic:

  • Why do you need the standard object functionality?  Does Salesforce provide out-of-the-box functionality that you would otherwise have to build yourself if you go the custom object route?  (e.g. Case Escalation Rules, Account Teams, Community Access, etc)
  • Consider your license impacts between custom vs standard.  Standard objects like Opportunity and Case are not available with a platform license.
  • Don’t get carried away.  Every “thing” in the world could be generalized to an account object while every “event” in the world could be generalized to a case.  These types of implementations are very difficult to maintain.

Step 6 – Define Enterprise Object Classification and Tiering Strategy

Data Tiering

Object classification and tiering is an important component to an enterprise Salesforce DA.  I try to classify objects across 3 different categories – however you may have more or less depending on your architecture design.

  • Core Data – This is data that is central to the system and has lots of complexity around workflow, apex, visualforce, integrations, reporting, etc.  Changes to these objects must be made with extreme caution because they underpin your entire org.  Typically these are shared across multiple lines of business (e.g. Account object), have LDV (e.g. Tasks), or complexity (e.g. Sharing objects).  Information Technology should lock down the metadata and security on these objects pretty tightly.  It will be up to IT to maintain the data in these objects.
  • Managed Data – This is data that is core to a specific LOB but does not affect other areas of the system.  Depending on the number of LOBs in the system this may or may not be something like the Opportunity or Case object.  The objects still have high complexity in their workflow and customization requirements; however, the object and code are regionalized to a single LOB.  In this layer you can enable Business Administrators to manage the data for their LOB.  In fact, pushing data management of these objects into the business is critical to your ability to scale on the platform.
  • Delegated Administration Data – These are typically custom objects that have been created for a specific LOB and are completely isolated from other areas of the system.  They are typically “spreadsheet apps” or mini applications that have a very simple workflow and business processes.  Therefore the data AND metadata of these objects should be put into the hands of business and delegated administrators.  These objects become great candidates for Citizen Developers within the enterprise because you enable the business to make their own apps within a sophisticated environment.

You can also use your tiering strategy to assist with archiving (below).  As you move data out of the core layers and into delegated layers you will increase your scalability, agility, and even performance.  Just make sure you are not creating data duplication and redundancy in your DA.

Step 7 – Design Your Security Model and Data Visibility Standards

Another architectural principle I recommend for Enterprise DA is “The Principle of Least Privilege”.  This means that no profile should ever be given access to an application, object, or field unless specifically required.  However, I do NOT recommend making the entire sharing model private.  This would cause significant and unnecessary complexity in your sharing design.  Unnecessarily private data will lead to data duplication issues and could also lead to performance impacts.

Step 8 – Design Your Physical Data Architecture

It is time to build a PDM.  I call this “framing” as it will be the first time you can start to see what your solution will look like within Salesforce.

  • Start to map out an object for each consolidated entity from your LDM.
  • Which entities for your LDM are necessary to be persistent in Salesforce?  Which entities can be kept off platform?  Data that is not used to invoke business processes (workflow/triggers/etc) is a candidate to be kept off platform.
  • Avoid creating objects for simple lookup data where possible.  Utilize picklists and multi-picklists as much as possible in an attempt to “flatten” your data model.
  • Salesforce objects are more like large spreadsheets.  There will be lots of columns and denormalized data in many cases vs a more traditionally normalized database.
  • Take your earlier volume estimates for your LDM and reapply them to your consolidated design.  You should have a rough order of magnitude now for each consolidated entity you are considering.  Try to get specific volumes at this point.  It becomes very important for licensing and LDV.
  • Make sure you have considered many-to-many relationships as these “junction objects” have the capability to grow very large in Enterprise environments.
  • Any objects with volumes in the millions should be considered for LDV impact.  While outside the scope of this post you may want to consider changes to your PDM to minimize volumes where possible.
  • Data replication and duplication in Salesforce is OK.  (Data architects please sit back down.) Sometimes it is necessary to support the business processes utilizing these frowned upon methods.  Salesforce actually works best when you break some of the traditional rules of enterprise data architecture – especially normalization.

As far as data off platform is concerned… I recommend keeping data off platform that you don’t need.  You want Salesforce to be a Corvette in your enterprise (fast, agile, sexy) vs a utility van (slow, unkempt, and kind of creepy).

Step 9 – Define Enterprise-wide and Org-Wide Data Standards

It is time to build a set of standards when it comes to your data model.  You need to consider Field Labels vs API names, and common fields to maintain on each object.  Also coming up with an enterprise model for your Record Types will be critical.

The following list is what I like to do:

  • I create Record Types on EVERY object.  The first record type usually has a generic name like <Company Name>. (e.g. Dell, IBM, Google, etc).  It is much easier to refactor objects in the future if you start with record types from the beginning.
  • LOB specific Record Types always begin with a LOB designator (e.g. CC – Incident for “Contact Center”)
  • LOB specific objects and fields should have the LOB designator in the API name.  (e.g. CC_Object__c, CC_Field_Name__c)
  • Depending on the number of fields you expect to have on a given object, consider tokenizing the API name (e.g. CC_FldNm__c).  This will save you a great deal of pain later when you start hitting limits on the number of characters that can be submitted in SOQL queries.
  • I create common fields on EVERY object.  Fields like “Label” and “Hyperlink” can hold business friendly names and URLs that are easily used on related lists, reports, and email templates.
  • I usually copy the ID field to a custom field using workflow or triggers (a trigger sketch follows this list).  This will greatly assist you later when trying to integrate your full copy sandbox data with other systems.  (I never use the Salesforce ID for integration if it can be avoided.)
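Here is a minimal sketch of the trigger approach, assuming a hypothetical custom text field Account_Key__c on Account:

    // Copy the Salesforce ID into a durable custom key field (Apex).
    // Runs after insert because the ID does not exist before insert.
    trigger AccountKeyCopy on Account (after insert) {
        List<Account> updates = new List<Account>();
        for (Account acct : Trigger.new) {
            // Trigger.new is read-only after insert, so build fresh
            // records carrying only the Id and the field to set.
            updates.add(new Account(Id = acct.Id, Account_Key__c = acct.Id));
        }
        update updates;
    }

Because the copied key is ordinary data, it survives data migrations between orgs, whereas the native ID is reassigned whenever a record is re-inserted somewhere else.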

You may or may not want to follow these.  The point is to create your standards, implement them from the beginning, and govern your implementation to ensure your standards are followed.

Step 10 – Define Your Archive & Retention Strategy

Even though Salesforce has a great history and reputation for keeping your data safe, you still have a responsibility to your organization to replicate and archive the data out of Salesforce.  Here are some considerations:

  • It is more likely that you will break your own Salesforce data than for them to suffer a data loss.  Salesforce will assist you should you need to try to recover your data to an earlier state, but a mature enterprise needs to have the capability to be self-sufficient in this area.
  • Weekly backups are provided by Salesforce and may be fine for SMB; however, I recommend a nightly replicated copy.  There are partner solutions that will make this easy – or you can build a custom solution using the Salesforce APIs.
  • I would use your replicated copy for 2 purposes.  One would be to feed your data warehouse as necessary.  The other is for recovery purposes.  I would NOT use the replicated copy for reporting, and I would try not to use the replicated copy for any real-time integration requirements.  This adds an undue burden to your technical environment and ties your Cloud solution into your on-premise infrastructure.  Tightly coupling Salesforce to your existing IT infrastructure may cripple your agility and flexibility in the cloud.

Step 11 – Define Your Reporting Strategy

Your architecture is defined.  You know what data will be on platform, what data will be off platform, and where the best sources of all of this data are.  It’s time to define a reporting strategy for your Salesforce data.  Your strategy will be different depending upon your data architecture – but I will suggest the following guidelines that I have used successfully in large enterprises.

  • Operational Reporting should be done on Platform if possible.  The data necessary to support a business process will hopefully be on platform long enough to run your operational reports.
  • Analytical Reporting should be done off Platform.  Use traditional BI tools built upon your data warehouse for long running, trending, and complex reports.
  • Use the out of the box reporting and dashboards as much as possible.  Try to get your executives and stakeholders the reports they need coming directly from Salesforce.
  • Consider mashup strategies for off platform reporting solutions.  Some third parties offer applications that will integrate seamlessly into the Salesforce UI so users never need to leave the application.
  • Consider building custom reports using Visualforce or Canvas where appropriate.  The more you keep your users in the platform the more influence and momentum you will maintain.  As users move to other tools for reports, so too will their interest, attention, and eventually funding.
  • Don’t report out of your replicated Salesforce database.  Move that data into the data warehouse if analytical data is needed and keep users in Salesforce for real-time data.  Offline Salesforce reports will just confuse users and cause undue issues regarding data latency and quality.

Step 12 – Repeat

Just like Enterprise Architecture, defining your Data Architecture is iterative and will continually improve.  Each time you go through an iteration you will increase your understanding, maturity, and competency on the platform.  And as you improve your data architecture, so too will the business value of your Salesforce deployment increase.

Other Helpful Resources

Salesforce’s Best Practices for Large Data Volumes

Multi-Org vs Single-Org Architecture Strategy

This article was originally published for DeveloperForce on October 23, 2014.  See the following link: http://sforce.co/1tfz4x7

 

One of the most important decisions throughout your Salesforce journey is to decide your “org strategy”.  What this really means is: “How many instances of Salesforce will you have in your company?”  As a Certified Technical Architect I mostly deal with Fortune 500 companies.  The larger the company the more complex this question becomes. It is one of the most foundational and architecturally significant choices that must be made – this decision will impact all future Salesforce initiatives and designs.

Here are the questions that I ask my clients in order to make a recommendation regarding the most appropriate org strategy:

Question 1 – What is your Enterprise Architecture operating model?

The most important consideration is that your org strategy is an Enterprise Architecture level decision and should not be made without a thorough understanding and analysis of your Enterprise Architecture model.  In Enterprise Architecture as Strategy (Ross et al.) the authors describe an enterprise architecture operating model using a 2×2 matrix where the axes are A) Business Process Integration and B) Business Process Standardization.

Ent Arch Operating Model

 

  • Companies high in the unification space should have as few orgs as possible.
  • Those in the replication quadrant can (and probably should) have multiple orgs, however you should consider deploying a managed package from a central org into all of the replicated business units.  That way you can provide local control/administration but maintain your pre-defined standard business processes.
  • Companies in the Coordination quadrant should try to stick with as few orgs as possible.  The high process integration requirements can be met with a web services integration strategy; however, the potential value to the company is much higher in a single org.  A proper governance strategy and an effective service delivery strategy are necessary to keep multiple Lines of Business (LOBs) happy while executing in parallel.
  • Companies in the Diversification quadrant will probably always have multiple orgs.  The work to consolidate different business processes and data models into one org is usually very complicated; plus it is very uncommon for the diversified business units to be accustomed to working in the same space as the other LOBs.

Question 2 – Who is paying for it?  (What is your scope of control?)

Depending on the number of deep pockets in a company, you may have no choice when it comes to org strategy.  Orgs might be popping up all over the company.  You might not be dealing at a senior enough level of the company for Question #1.  Therefore you are often at the whim of those controlling the purse strings.  If this is the case it is your job as the architect to make sure the company moves in the following direction:

  • You need to have coordination and awareness of the entire Salesforce community in your company.  This would be the precursor to setting up a Global Center of Excellence.
  • You need to sit with enterprise data architects to understand what data objects should or should not be in each org – and that each org is aware of any possible duplicated data objects.  Data problems will be the death of Salesforce in your company.
  • You need to push your company toward setting up a Global Center of Excellence (CoE).
  • The CoE should design and build a “Reference Architecture” that defines where and when an additional org is reasonable or necessary.
  • You need to get a seat at the table for Question #1 and make sure that you are making architecture decisions at the correct level of the company.

Question 3 – Are you prepared to deal with the complexity of having multiple LOBs inside a single Org?

There is a LOT of complexity in designing a Salesforce org to support an enterprise with many different LOBs.

  • Make sure you have good naming standards across your data model, configuration design, and custom code.
  • Your security design will be very complex.  The design of your profiles, permission sets, role hierarchy, sharing model, and public groups can be difficult to design, difficult to maintain, and especially difficult to refactor.
  • Your apex design will need to be very mature to support a large enterprise model.  Adhere to strict separation of concerns and make sure you have a technical architect overseeing all aspects of the org.
  • Your data model becomes much more complex and controversial as you increase your scope in the org.  Make sure you are an active participant in the Enterprise Data Strategy and employ a Salesforce Data Architect who specifically can manage the design and maintenance of the data model.

Question 4 – How much change can you effectively manage in a single org?

Depending on the number of parallel work streams, you may have difficulty supporting many initiatives at once.  This is especially difficult unless you set up an effective governance model.

  • A global CoE should manage the business architecture and common standards of ALL Salesforce instances across your enterprise.
  • Each org should have its own governance committee (which would be the CoE in a single org environment) that manages the product management and sets direction and priority within each org.  They also need to define specific Roles and Responsibilities within the org and a RACI matrix for any and all types of changes.
  • Each org should have its own architectural review board that will actively design, review, and approve technical changes to the environment.  This includes ALL configuration, the data model, and custom code.
  • Each org should have its own tiered administration teams to enable the LOBs to make some changes by themselves without requiring production releases.

Question 5 – How many Lines of Business (LOB) can you support?

As the number of LOBs in your orgs increases, typically your overhead will increase as well.  The request backlogs grow, the time to market decreases, and generally stakeholders become more anxious.  This drives the business to want separate orgs.  This, however, is NOT a good reason to split orgs.  You can solve this with the following tactics:

  • Set up a CoE to manage the stakeholders and articulate a clear roadmap of functionality releases across the LOBs.
  • Tier your administration services allowing the LOBs to make their own changes through the use of delegated administration and a tiered data architecture.
  • Charge back your support costs to the businesses by establishing a license tax.  This will allow you to fund development and maintenance resources to support all the LOBs without the need for large capital efforts which would slow time to market. Different LOBs can pay a higher tax to have more dedicated resources focused on their backlog requests.
  • You may have smaller LOBs pooled around a CoE and centralized resources while your larger LOBs fund and support their own org.

Question 6 – What are the regulatory, compliance, or security requirements?

This is a subject that carries a lot of weight in splitting orgs.  In some industries, especially healthcare and financial services, there will be certain factors requiring you to set up a walled garden.

  • Personally Identifiable Information (PII) may have much different security requirements than other business data.
  • Some LOBs have complex encryption requirements, IP restrictions, and confidential business data that create undue burden on the development and support team if those same requirements were placed upon all the LOBs.

Question 7 – Will you be using Chatter?

Unless and until Salesforce releases cross-org Chatter support, spreading users across orgs can be a big hindrance to social collaboration.  This is especially painful if you have some users that use multiple orgs.  There are partner solutions and custom development options – but this can be very complicated to implement correctly.  I do not believe Chatter should be the driver that pushes you into one design over the other – however it is important to understand the scope of social collaboration desired across the enterprise.  There could be a significant cost impact.

Question 8 – Are you willing to pay for the overhead of multiple orgs?

Costs increase as the number of orgs go up.

  • There are increased licensing costs if users need to access multiple orgs.
  • Often 3rd party license costs increase as well depending upon the solution.
  • Integration costs increase as you attempt to integrate data and business processes across the orgs.
  • Environment management costs increase in a multi-org design with the added complexity of multiple sandboxes and release cycles.
  • You should plan to use SSO because maintaining multiple user names and passwords is a terrible user experience and will lead to a lot of wasted support time resetting accounts.

Question 9 – Who will modify your org(s) and who will maintain your org(s)?

  • If you have multiple development teams in the same org you will have a much higher level of overhead and governance than if each team was in a different org.
  • If you decide to have multiple teams in the same org, you will need to build a robust environment management and release strategy.  Be prepared to employ a dedicated environment manager to handle the movement of code and metadata across the environments.
  • There should be as few true system administrators in the org as possible.  Custom profiles and permission sets are critical to keep Roles & Responsibilities in check.
  • Each LOB should have and maintain their own “Tier 0” support layer.  This is usually the business user subject matter expert (SME) most familiar with Salesforce that has been given special privileges to fulfill business change requests in production (for example creating a report or adding users).
  • You should have a single service desk and entry point for all of your Salesforce support (Tier 1).  If you have properly integrated with your IT help desk this can mask having multiple orgs and multiple support teams.
  • Each org would need its own Tier 2/3 support teams to troubleshoot issues.
  • Multiple orgs will either all need their own release management process OR your existing release management process will become much more complex.

Question 10 – Have you reached the limits of what you can do in a single org?

Obviously a question you would not be asking initially – but this can be a big draw to split orgs.  Each org gives you a fresh set of limits that can help you get around issues you may have not been able to solve in a single org.  Before you setup a new org to increase API calls or custom code limits, talk to your Salesforce Account Team.  Some of those limits may be able to be increased which would save you from going down the wrong architectural path.

Question 11 – What is your integration strategy for business processes and data across multiple orgs?

Inevitably in a multi-org environment you will need to integrate across the orgs.  Salesforce makes this dangerously simple with Salesforce2Salesforce integration design.  Before too long you have a spider web of integrations, data replication, and very brittle point-to-point connections.

  • Consider integrating through your Enterprise Service Bus.  While this increases your integration timelines it will also keep you nicely decoupled.  If your enterprise integration strategy includes a services oriented model, you will be decoupled via business services – which should have a close correlation to your object model within Salesforce.
  • Look at Salesforce’s roadmap.  We are all anxious for their external object support via OData.  This will hopefully simplify integrations especially around master data that are only needed for reference.
  • Consider a hub & spoke architecture in which one of your Salesforce orgs is set up with the master data that would be shared out to other orgs (via OData, S2S, web services, etc).  Not following this pattern may lead you to a spaghetti design.
  • Consider using managed packages to deploy functionality from your hub org to your spoke orgs.
  • Consider using a reporting and/or collaboration hub where all necessary data is pushed to a separate org for consolidated visibility.

Question 12 – Do you have any Customer 360 or global case management requirements?

I saved this question for last because sometimes this is the deciding factor for a single-org strategy.  However multi-org strategies can still support these requirements – only with added complexity and cost.  Using a hub/spoke or reporting hub can also help you achieve your Customer 360 requirements.

 

I hope this list helps you define your criteria and strategy for your Salesforce orgs.  The most important thing that I can say is that your org strategy is an ENTERPRISE ARCHITECTURE level decision and should be treated as such.  Don’t allow the pressures of politics or timelines to push you in the wrong direction.  Know the pros and cons of both approaches, and make the decision that will drive the most long-term value for your business from the Salesforce platform.  You may also want to reach out to a Salesforce.com Certified Technical Architect.  This list of criteria is secondary to the deep analysis and platform experience necessary to make the right decision.

If you have any other key considerations please leave them below in the comments and we can discuss them.

Building an Enterprise Architecture with Salesforce.com

I’m a huge fan of the SFDC platform. It has the potential to transform businesses if implemented and managed correctly. Unfortunately I’ve seen many companies come up short when trying to reap value from Salesforce. There are countless reasons for this but here are some of the top reasons I’ve seen:

Strategy

  1. No organizational vision or direction for the company’s use of Salesforce
  2. No roadmap and product management strategy that ties the platform’s use to corporate goals.
  3. No formal architecture, governance, or change management strategy, leading to an evolutionary design of either siloed functionality or a misuse of multiple Salesforce instances.

Environment

  1. The original implementation was delivered with little-to-no technical understanding by the customer.
  2. A lack of understanding between how and when to use configuration vs custom code.
  3. The implementation consultants delivered “functional value” only and therefore lack any standards in the configuration design and coding layers.
  4. There are no patterns, no modularization, and no chance to extend the functionality without significant refactoring.
  5. Test coverage is completed only as an afterthought and lacks any kind of design or assertions.  There is a high risk you will break something when you change configuration or code.
  6. Lack of any form of living documentation that captures your design and the reasons for that design.

Adoption

  1. Unwillingness or inability to retire legacy systems has kept people from using the system.
  2. Lack of training or communication to end users
  3. Data Quality in the new system is low and the outputs of the system are not trusted

So how do you prevent these types of issues?  How do you fix them if you are already a victim of this problem?  My recommendation to you is this: Build an Enterprise Architecture with Salesforce.com

How do you do this?  It is not easy and it will take time, money, and platform knowledge.  But if done correctly Salesforce can become a differentiator for your business.

This post and the ones that will follow it will walk you through the process of establishing an EA for Salesforce.  The main framework I am using is the TOGAF Architecture Development Methodology.  If you are not familiar with TOGAF, I recommend reading up a little on the framework.  TOGAF is an approach for designing, planning, implementing, and governing an enterprise information technology environment and is an open standard of The Open Group.

TOGAF - ADM

The TOGAF v9.1 Architecture Development Methodology from The Open Group

Throughout the post I will follow the phasing of TOGAF and point out some relevant questions and considerations as you shape your Salesforce Architecture.  That being said here is roughly the process you will follow as you build your plan:

  1. Define Your Architectural Vision for Salesforce
    1. Who are your stakeholders and your business goals?
    2. What are the corporate goals that will influence your vision?
    3. What are your architectural principles?
    4. Where is your architectural repository and how will you manage your artifacts?
  2. Design Your Business Architecture and Strategy
    1. What Lines of Business will use Salesforce?
    2. What are your business requirements?
    3. What business processes will be implemented on or influenced by SFDC?
    4. What building blocks of your business will be built on SFDC?
    5. What is your org strategy?
    6. Who are your actors and possible license choices?
  3. Design Your Information System Architecture
    1. How do you design an Application Architecture on SFDC?
    2. How do you design a Data Architecture for SFDC?
    3. What is your application rationalization plan?  What will be added or retired with SFDC (and when)?
    4. What is your Gap/Fit and what will you do about it?
  4. Design Your Technology Architecture
    1. What is your integration architecture?
    2. What is the right architecture for custom development?
    3. How will you manage your technical environments?
    4. What are your technical risks and how will you mitigate them?
  5. Design Your Implementation plan
    1. What is the right methodology to use?
    2. What is your SFDC Architectural maturity?
    3. What is the appropriate release strategy?
    4. How do you build a roadmap?
  6. Plan your migrations
    1. What is your Cost/Benefit analysis for each release?
    2. What are your project dependencies?
    3. What is the right deployment strategy?
    4. What is your test strategy?
    5. What is your data conversion plan?
  7. Design your governance plan
    1. What groups will you need to integrate your efforts?
    2. How do you establish an effective CoE?
    3. What is an Architecture Review Board?
    4. How do you setup a system administration model?
    5. How do you manage your vendors?
    6. How will you ensure adherence to your architectural design and standards?
  8. Define your Change Management Plan
    1. How do you effectively manage change?
    2. What types of changes can be made in production and by whom?
    3. What types of changes should be made in a project vs bug fix?

My recommendation would be to walk through the list in order and try to answer all of the questions at least once – to the best of your ability at the time.  Then return to the first section (architectural vision) and start to refine to a deeper level of detail.  TOGAF is cyclical, and so too is your design and implementation of an EA for SFDC.  It follows the concept of continual improvement that will evolve each time you iterate through the cycle.  So get busy and build out your EA for SFDC.  Your users, your technology teams, and your business stakeholders will thank you for it.

Salesforce.com Advanced Developer Certification

It took me a couple tries but I finally finished my Force.com Advanced Developer Certification. It was harder than I expected – and easier. I wanted to share a little bit of my journey and some tips for those of you working towards the same certification.

My background: I already knew how to code. I used to write C/C++ and if I am hard pressed I can still throw down some mean Java. But I have since moved on (so I thought) in my career to really important things. You know, like management (meetings), architecture (drawings), governance (LOTS of meetings), and project management (taking notes). When I was introduced to Salesforce I had no interest in going back to my coding roots. But as I got to know the platform I realized how much I loved it. I decided to go after the architect certification (read about that here).  The only prerequisite for the CTA (Certified Technical Architect) is the Salesforce.com Developer certification.  I had a feeling that would not be enough pre-requisite knowledge for me and that in order to be a great architect I would need to know the ENTIRE platform.  So I set a goal of obtaining all of the certifications, including the Advanced Developer.  (BTW I was correct – developer certification in no way prepares you for CTA!)

In order to learn Apex and Visualforce I started with Jason Ouellette’s book: Development with the Force.com Platform.  This was a great book as it was very easy to follow, didn’t assume I was an idiot, and got right into the heart of true Force.com development (which sadly yes DOES require coding).  Walking through this book and building out the examples gives you a great understanding and basis for moving forward as a developer on the platform.  But I was not stopping there – I needed to master it.  I had already taken DEV401 in the classroom; however, I decided NOT to take a formal classroom class for DEV501.  For me, doing is better than listening.  Besides, there are tons of great resources online these days to help you get started.  You have to DO the work in order to learn it.

Here is a list of the resources I used as I was learning Apex and Visualforce (in no particular order):

1) Development with the Force.com Platform, Jason Ouellette

2) Advanced Apex Programming, Dan Appleman (This really helps you start to understand writing scalable apex code – far beyond the simple concepts of managing for loops)

3) Jeff Douglas’s blog (An amazing collection of articles that help you to understand how far you can take the platform)

4) Keir Bowden’s (Bob Buzzard?) blog (Keir is truly an expert at Visualforce and especially incorporating Javascript into Visualforce)

5) Force.com Apex Code Developer’s Guide (Read through all the chapters leading up to the reference section – 15 Chapters as of Summer 14)

6) Visualforce Developer’s Guide (Once again – read through the first 21 Chapters leading up to the Standard Component Guide)

7) Apex (Salesforce.com’s own online premier training if you have access – I did this in lieu of the DEV501 class)

8) Visualforce Controllers (also in lieu of DEV501)

9) Managing Development with Force.com (also in lieu of DEV501)

10) Visualforce in Practice (Michael Floyd, Don Robbins, et al.)

11) Visualforce Development Cookbook, Keir Bowden

12) Salesforce’s Workbooks on Apex and Visualforce

And lastly – I wrote production code.  I got assigned to projects where I would not just be managing (meetings) and architecting (drawings) but doing real roll-up-my-sleeves coding.  I refactored triggers.  I built integrations.  I used JavaScript.  I built killer test coverage.  I didn’t just try to absorb the knowledge; I had to apply it.

Now onto the certification itself.  The certification process has been covered in great detail by some great posts so I will not explain it here.  I will say that getting access to the written assignment on Webassessor is a MAJOR pain and is like trying to get concert tickets or something.  Plan on having multiple computers, multiple browsers, lots of patience, and be ready to go the SECOND the assignment window opens.  I would recommend just grabbing any slot for your written essay; you can always change the date/time of your essay after you have confirmed your spot in the window.  I timed out multiple times trying to find the perfect appointment slot.  Just take whatever you can get the moment you can – and then change it later after the craziness has died down.

My first time taking the written assignment I was pretty overwhelmed by the size of the requirements relative to the time I had expected it to take (20-30 hours?).  This may be different for you, but I thought it was a pretty good-sized assignment.  I hit a major platform bug using AJAX and Select Lists.  (If I ever get a free minute I plan to fully document this bug and try to get it resolved.)  The bug REALLY threw me off, and rather than changing my architecture I decided to write JavaScript around the bug.  I will warn you right now: DO NOT USE JAVASCRIPT UNLESS YOU ABSOLUTELY HAVE TO!  I figured that my solution was not ideal but I really didn’t want to change my architecture.  I documented my reasons for using the JS in my code and in my essay.  Alas it was no good – I failed my first attempt.

My second attempt was much, much easier.  Why?  The assignments were almost identical.  I was able to reuse a lot of my work from the first assignment on the second.  (I don’t know if this is always the case.)  But I will say that after taking two different assignments it is clear that there is a pretty clear grading rubric and the judges are (at least were) looking for a very specific solution.  And my second attempt did NOT require that dumb select list which was so buggy with AJAX!  (By the way, the easiest workaround for the bug I found was just to do a complete post back on the select list change and forget about AJAX.)

My tips for you:

1) Don’t touch your developer production org other than to deploy your code and to test what you have deployed – use the sandbox for any and all config/dev and use changesets/IDE/API to do your deployments.

2) Don’t test with large data volumes only in your test coverage – take the time to create some data loader files and load large volumes to test your solution’s scalability.

3) Validate you have met ALL the requirements, including looking for how your undeletes work.  Validate and sum up your data calculations manually.  Make sure nothing has fallen out in your scalability code.

4) Know how to scale a trigger for complex calculations (hint) – see the sketch after these tips.

5) Know how to scale a visualforce solution for high data volumes (StandardSetControllers, SOQL offset, etc).

6) Cover every requirement that you can with test coverage.  Make sure to assert your expected behaviors.

7) Use runAs in your test coverage to test the solution using the profiles you expect.

8) Test the security of your solution in your test code.  Make sure to test positive and negative cases.
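To make tips 4, 6, and 7 concrete, here is a minimal sketch of a bulkified rollup trigger and a matching test.  The objects (Order__c, Line_Item__c) and fields are hypothetical stand-ins for whatever your assignment actually requires.

    // One aggregate query and one DML statement regardless of batch
    // size.  (Recalculating the old parent after re-parenting is
    // omitted for brevity.)
    trigger LineItemRollup on Line_Item__c (after insert, after update, after delete) {
        Set<Id> orderIds = new Set<Id>();
        for (Line_Item__c li : (Trigger.isDelete ? Trigger.old : Trigger.new)) {
            if (li.Order__c != null) orderIds.add(li.Order__c);
        }
        // Default every touched order to zero, then overwrite with sums
        Map<Id, Order__c> updates = new Map<Id, Order__c>();
        for (Id oid : orderIds) {
            updates.put(oid, new Order__c(Id = oid, Total_Amount__c = 0));
        }
        for (AggregateResult ar : [SELECT Order__c oid, SUM(Amount__c) total
                                   FROM Line_Item__c WHERE Order__c IN :orderIds
                                   GROUP BY Order__c]) {
            updates.get((Id) ar.get('oid')).Total_Amount__c = (Decimal) ar.get('total');
        }
        update updates.values();
    }

    @isTest
    private class LineItemRollupTest {
        static testMethod void rollsUpWholeBatch() {
            // runAs an un-inserted user built from a real profile (tip 7)
            Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User'];
            User u = new User(Alias = 'tuser', LastName = 'Test',
                Email = 'tuser@example.com', EmailEncodingKey = 'UTF-8',
                LanguageLocaleKey = 'en_US', LocaleSidKey = 'en_US',
                TimeZoneSidKey = 'America/New_York', ProfileId = p.Id,
                Username = 'tuser' + DateTime.now().getTime() + '@example.com');
            System.runAs(u) {
                Order__c o = new Order__c();
                insert o;
                List<Line_Item__c> items = new List<Line_Item__c>();
                for (Integer i = 0; i < 200; i++) {
                    items.add(new Line_Item__c(Order__c = o.Id, Amount__c = 10));
                }
                insert items; // one 200-record batch exercises bulkification
                o = [SELECT Total_Amount__c FROM Order__c WHERE Id = :o.Id];
                System.assertEquals(2000, o.Total_Amount__c); // tip 6: assert
            }
        }
    }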

Both assignments took about 2 months for me to receive my feedback.  I’m super happy that I passed and now can get back to my real job of architecture (drawings) and management (meetings).  But now that I’m dangerous with Apex and Visualforce, perhaps you may find me behind the keyboard of an AppExchange app coming to a store near you?

Becoming a Salesforce Architect – A Mind Map

There was a recent article by Business Insider (http://www.businessinsider.com/resume-tech-skills-ranked-by-salary-2014-3) in which technology salaries were ranked, and guess who came in #1?  That’s right – the Salesforce Architect!  I must say it is nice to have focused so deeply on something that is so hot right now.  But what does it take to be a good Salesforce Architect?  In one of my past posts I wrote about my journey to the Salesforce Certified Technical Architect (here).  But does the certification guarantee you are a good architect?  It probably does, due to the rigor that is required to attain it – but not necessarily.  And if you are NOT certified, does it mean you aren’t any good?  I hope not – because I was “uncertified” for a number of years during which I delivered successful projects.

So what does it take?  One of the best things to know when starting a journey is where you are going.  So I put together a mind map (got to love XMind!) that describes what I consider the requisite knowledge.  I hope that between my earlier post and this overview you can see the full body of work necessary to become (in my view at least) a good Salesforce Architect.  And I hope that once you take a look you will understand the amount of work, knowledge, and experience that is required to get to this level.  And if you decide to take this journey – I think you will be justly rewarded.  Not only in salary, as the Business Insider article calls out, but also in becoming truly an expert in what you do.

Click on the image to see the enlarged version.  And please – if you have other skills that you think I have not listed or you think some of this is overkill…  please leave your comments below.

MindMap

My Unofficial Study Guide for the Certified Technical Architect

For more background on the whole process of CTA check out my lengthy post.

Prerequisites

– DEV401 or equivalent
– DEV501 or equivalent
– ADM201 or equivalent
– ADM301 or equivalent
– Sales Cloud Consultant Certification or equivalent
– Service Cloud Consultant Certification or equivalent
– Enterprise Technical Architecture – especially patterns for traversing from the cloud to a customer’s internal network
– Enterprise Business Architecture – especially identifying and managing stakeholders, business processes, and enterprise operating models
– How to talk to Salesforce (the different API options in and out)
– How to run a project (deep understanding and ability to articulate waterfall, iterative, and agile concepts)
– Lead Architect responsibilities including application life-cycle management, automated testing, continuous integration, etc
– Public Speaking
– Mobile Architecture Strategies and Differences
– Understanding of TCP/IP, SSL, x509, etc

White Papers

– Record Level Access: Under the Hood (one of my favorites – study it closely)

Developerforce

– How to Implement Single Sign-On with Force.com (Delegated Authentication)

Other blogs & resources

– REST
– All of the Technical Architect courses on the Salesforce Premier Training portal

Videos

Other tips:

– Understand the security model and how to setup all of the different types of platform capabilities (Reports/Dashboards via Folders, Content via Libraries, Knowledge via Data Categories, Chatter via Groups, etc.)
– In the hypothetical scenario try to calculate basic volumes for the numbers that they throw at you and any inferred data model that is designed.  Both of my hypotheticals dealt with inferred data volumes as opposed to explicitly defined data volumes.
– Understand implicit sharing of the account to other objects as well as the fact that the account hierarchy does NOT implicitly grant any sharing across the account hierarchy
– Understand what happens to the role hierarchy when partner portal accounts are used (1-3 roles are appended underneath the internal account owners role)
– Understand how HVCP works and the sharing model (Sharing Sets, Sharing Groups, etc)
– I strongly recommend setting up a partner and customer community with all sorts of B2B and B2C accounts and playing around with the sharing features to fully vet them out
– Understand the detailed flows for OAuth, IdP-initiated SAML, SP-initiated SAML, and OAuth with SAML
– If you don’t fully grasp OAuth and SAML, set up your own identity provider and build out the solutions.  The light did not go on for me until I built it out myself.
– Review all of the content on http://trust.salesforce.com
Good Luck!