On the Trail to TrailheaDX

Look at you, all full of pep, spunk, and other synonyms. You are a week away from going to TrailheaDX, the Salesforce conference dedicated to developers. Are you all ready? Do you know what ready is? I sure didn’t, so I decided to do some research and share the fruits of my labor. Let’s run through what to do to make the most out of your trip.

Packing away

Packing smart

Packing is probably the first thing most of you thought of when you thought of preparation for the conference. It’s the same thing you do any time you have a trip that lasts more than a day. The key, however, is to pack smart and that is what we are here to do.

Let's talk fashion, baby. What we want, at the core of our clothing choices, is "practical comfort." Start with your shoes; you aren't some houseplant, you are going to be all over the place going to sessions, meeting people, and snapping some sick selfies #ImStillCool. Make your shoe choice comfortable, broken in, and clean. Your legs and feet will hate you for your most stylish loafers or high heels, but you won't be making many friends with your well-worn gym shoes either. Carry that thought through the rest of your clothing choices. Pants and tops should be comfortable to wear while still being nice enough to network in. The weather averages for this time of year swing between the high 40s and low 60s, so keep your personal tolerances in mind.

Now that you have all your clothes ready, it's time to talk tech. The laptop and power cord are obvious, as are the phone and charger, but wait, that's not all. If you plan on squeezing as much out of the conference as possible, you will want a portable battery pack to keep your phone going throughout the day. Make sure it has the ports you need for the gear you have. If you have a laptop that can run off USB-C, your battery pack can pull double duty running that as well. Keep in mind, though, that you should try to keep your pack below 100Wh; at the 3.7V of a typical lithium cell, that works out to roughly a 27,000mAh battery. If you really need more juice (I'm looking at you, person with a MacBook, Switch, and a phone) you can get pre-approval for packs up to 160Wh, but I usually try to go with the path of least resistance.

That's it, for this section at least, right? Ha, I'm still writing, so open that suitcase back up. Take a look: you have your clothing, tech, and some essentials like your toothpaste, toothbrush, hair brush, deodorant, etc. all neatly tucked away. You have made maximum use of your available space and couldn't fit a thing more. In any normal circumstance I would give you a high five, but this time you're wrong. Lean in closely and I shall whisper tales of vendor swag. Wonderful branded treasures await you, and you need a place to put them on the return trip. Nobody wants to choose between returning with their socks or their Astro plush.

Planning your days

You have a lot to do and so much to see. In fact, you have too much; more than you can possibly experience in such a short time. There are over 150 sessions to cram into two days, so you're going to have to plan carefully. If you are a paper-and-pen sort, you can use the online schedule to draft up your days. If you want some help, use the Salesforce Events app (available on the Play Store and the App Store). You can log in with your registration ID and last name. Make sure the right event is selected; there could be multiple events happening around the same time.

If you are going as a group, you can experience some of the event vicariously. Get together as a group and see if you can come to an agreement dividing people among different sessions in the same time slot. Don't be a tyrant about it; make sure your group members are cool with their choices and are OK taking notes during their sessions. At some point, trade notes and thoughts with each other.

Networking, but like… with people

Do you plan on mechanically going from talk to talk to talk? Of course not! This will be a few days surrounded by your peers, so you had better get ready to rub elbows. You can actually do some prep work here. What sort of social media presence do you have? Could it use some cleaning up?

  • LinkedIn – Keep things professional here. Stick to the facts and details pertaining to your career and industry. Make your posts regular and informative.
  • Twitter – You can get a bit more personal here but try to mind your audience. Have an appropriate picture and header image.
  • Trailhead – Earn some badges, get some points, raise your rank. This can help people see what you are interested in when it comes to Salesforce. A quick note about the rank: don’t be weird about it. Everyone moves at their own pace, a higher or lower rank does not intrinsically mean anything.
  • Facebook – Short answer: no. Long answer: most people use Facebook for their most personal interactions. Here is where you post the pics of you doing some crazy stuff, the inside jokes with your BFFs, and tons of family talk. Ask yourself, "Is there anything I don't want a potential coworker seeing?" Maybe your Trailhead connections can graduate to this level of closeness, but let's not start here.

Think about getting some business cards with your social media URLs and handles on them. You can put a phone number on them if you want, but that sounds like a good way to end up with telemarketers.

Maintaining the facade of health

Jeez, one day into the conference and you look like heck. What went wrong? Have you been eating well? As a self-professed carb-ivore, I could live on rolls and sweets, but that would not leave me feeling well after a while. You may be in a new place, but try to keep your eating as normal as you can. How about sleep? The first day of the conference can theoretically last from 8AM to 11:30PM. Add in some jet lag and you have a recipe for a foggy head and eye bags. You can't control everything, but try to get a solid block of sleep while seeking out some peace and quiet throughout the conference. You can only run wide open for so long, so take some time to slow down. Let's also make sure that knowledge and swag are the only things we bring home. Pack or pick up a bottle of hand sanitizer and remember to actually use it.

Well now what

You can't stay forever; whether you participated in just the conference or the bootcamp as well, eventually you will return from whence you came. However, that doesn't have to be the end of your TrailheaDX experience. If your company sent you, consider hosting a presentation of your own! Maybe a best-of, or a focus on something you found interesting. If you have the time and the place, write up a blog post for others to enjoy your take on things. Comb through your contacts and connect with the people you met. Head to social media and thank the presenters for their time and talent. If there was a presentation you really wanted to see but it conflicted with a presentation you really had to see, wait for the TrailheaDX playlist to pop up on YouTube.

On your way

Now you look ready to go. You know how to pack to keep your sanity and your merch. You have an idea for what you are going to see and when. You have an idea for how to keep healthy out on the conference floor. You have a game plan for when you come back, bursting with sage-like knowledge. I’d say you are ready to hit the trail!

How to Involve Yourself with Salesforce (Outside of Salesforce)

If you want to get involved in the Salesforce community, the default answer is usually the forums. It isn't a bad answer, but it isn't a complete one either. I can tell by the way that you are reading a third sentence into this blog post that you are a person who wants more. Reading this fourth sentence tells me you're ready for more. You see, there is a wealth of opportunities to interact and get involved in ways that help yourself and others around you. Ways that can cater to your skills or help you develop new ones.

Stack Exchange

The most natural way to get involved is through asking and answering questions. Sure, you can do the exact same thing in the forums, but the Salesforce Stack Exchange is a highly tuned machine built for that exact purpose. Good questions and answers are rewarded with reputation, and greater reputation affords you greater privilege. There is also a gamification aspect in the form of badges you can earn for certain accomplishments. What that all boils down to is a tremendous gathering of knowledge and skill. Now, full disclosure: I voted and advocated for the existence of the Salesforce Stack Exchange, so I may be a bit biased in my fawning, but it earned a lot of goodwill from me in the past. The quickest way to get started is to sign up and ask a question or give an answer. I would suggest taking a bit more of an observational approach at first. To maintain a high level of quality on the site, the community moderators are swift, and to a newcomer, can seem cold or off-putting. Just read through some questions first, check out the how-to-ask and how-to-answer guides, and just be a good person.


GitHub

Do you have enough skill to do something? Really anything at all? If so, then you have what it takes to involve yourself in GitHub. GitHub is a massive development community based around the distributed version control system called Git. OK, you do need the ability to use Git, but there are tools that make it pretty darn easy. Once you are set up with an account and have the required software installed, start digging around for Salesforce projects that interest you. Even if you don't count yourself that strong a developer, you can still contribute. If you can write well, maybe documentation is your thing. If you know more than one language with high proficiency, some projects could benefit from translations. If you have an eye for user experience and interaction, perhaps design is up your alley. Maybe you just want to use the software; that is fine too. You can get involved by writing a detailed bug report when you run into one, or suggesting thoughtful features.

User Groups

These are wonderful options for people who like people. Now, while there are regular user groups, I can only speak for developer user groups. Typically there is food and drink with a speech or two. These speeches can cover a wide variety of material, and the format can range from informational to a hands-on tutorial. This is a great way to meet local developers who share a similar interest and skill set. If you want to get even more involved, give a speech yourself. The research involved in giving a speech oftentimes strengthens you in that particular area, and the speeches themselves can be a great resume booster. Now that I have talked them up, the best place to find them is probably (at the time of writing) Meetup.com. Make an account and start scouring your area. Because this is a group activity it does skew towards cities, but give it a try anyway.

Getting Out There

There are many more ways to get involved with the Salesforce community; I have just highlighted a few that I know and love. A well-placed question or answer on Stack Exchange can be a real lifesaver to someone in need. Participation in GitHub can let you either flex your existing skills or learn new ones. Going to meetups lets you meet and befriend real people in real life. Whatever you do, get out there and get involved.

This post also appears on LinkedIn

Finding What’s Hot and What’s Not with geolocation

Let's see, clear some cobwebs, dust off some settings, confirm that it has indeed been over a year since my last post. Yep, everything seems in order. To kick off this post-a-versary, how about a quick exploration of Salesforce's built-in geolocation features, now that this data has been automatically added to the standard address fields!

A quick primer for the uninitiated: geolocation information is the location of something on the globe. Instead of using streets or landmarks, it uses latitude and longitude to pinpoint a location's exact position. With this information at our disposal, we can start to look at data on a geographic level. To illustrate, we will look at a fictional company and how it can use geolocation to better outfit its salespeople.

Path Skull

Introducing Path Skull, a fictional manufacturer of outdoor goods. They sell to retailers instead of directly to consumers, so their salespeople need to work to get their product into stores. To help them out when courting leads, they are going to use geolocation to figure out what sort of products sell well in a particular area. There is no need to show off your rugged sneakers when there is only demand for boots, or your triple-insulated sleeping bag when the market just wants to keep cool at night.

A clean sweep

All of this will depend on having the latitude and longitude of your accounts and leads. You do have that populated, right? No? That is fine, because with Salesforce and Data.com it is all automatic; all you need to do first is enable it. Go into the settings for your org and locate the 'Clean Rules.' For our examples, we will be activating 'Geocodes for Account Shipping Address' and 'Geocodes for Lead Address,' but please activate whatever works for your use case. For more information, check out the official Salesforce release notes for the geocode clean rules.

Button Up

A new button on the lead will be the point of entry to this new functionality. Leads have a built-in address field that will become the center of our area to report on. First off, we will create a new Visualforce page with the Lead object as our standard controller and a new Apex class to use as our controller extension. In the constructor of our extension, we will use a distance query to find all the accounts within a 100-mile radius.

For this example I am just hard-coding 100 miles, but for greater flexibility, consider a custom setting with the ability to change it on the results screen. Next, we will grab the top five assets and products, as well as the bottom five, for the previously found accounts. The top five is the obvious one, but the bottom five could just as well tell you which products not to even waste your breath on.

Once you have your data, it is up to you to display it in the most relevant way on your page. To wrap it all up, add a new button to the lead which opens the Visualforce page, and place the button on the lead's page layout. With just a few lines of code, your salespeople now know what a lead may be interested in from the outset. How easy was it? With comments, the source for the controller turned out to be only 18 lines long.
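The original controller listing did not survive the blog migration. As a minimal sketch of what it described, something like the following would do the distance query in the constructor (the class name, property name, and queried fields are my assumptions; the top-five/bottom-five aggregation is left out for brevity):

```apex
public with sharing class LeadGeoExtension {
    public List<Account> nearbyAccounts { get; private set; }

    public LeadGeoExtension(ApexPages.StandardController stdCtrl) {
        // The clean rules populate Latitude/Longitude on the lead's address
        Lead l = [SELECT Latitude, Longitude FROM Lead WHERE Id = :stdCtrl.getId()];
        // Every account with a shipping address within 100 miles of the lead
        nearbyAccounts = [
            SELECT Id, Name FROM Account
            WHERE DISTANCE(ShippingAddress,
                           GEOLOCATION(:l.Latitude, :l.Longitude), 'mi') < 100
        ];
    }
}
```

The DISTANCE/GEOLOCATION filter in the WHERE clause is what makes this a one-query job; swap 'mi' for 'km' if that suits your org better.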

The Visualforce page to display the data as a pair of lists was only 15 lines long.
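That page is also missing from the migrated post; a skeleton along these lines would fit the description (the extension name and the nearbyAccounts property are illustrative assumptions):

```xml
<apex:page standardController="Lead" extensions="LeadGeoExtension">
    <apex:pageBlock title="What's hot nearby">
        <apex:pageBlockTable value="{!nearbyAccounts}" var="acc">
            <apex:column value="{!acc.Name}"/>
        </apex:pageBlockTable>
    </apex:pageBlock>
</apex:page>
```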

This was a simple example, but with location-based SOQL queries and the Location class, making use of location doesn't have to be hard.

Using Gulp to Manage All Your Static Assets

The web development community continues its march towards more advanced methods and processes. CSS preprocessing, JavaScript testing and linting, image compression, and much more make up the web development landscape. Where does that leave you, humble Salesforce developer? Your JavaScript, CSS, and images all get loaded into static resources and that's it. Surely there is no way to integrate the Salesforce way with all these cool, new tools, right? Prepare to be mildly interested.

Now, before we get started, I am going to make a number of assumptions:

  • You are semi-comfortable with the command line
  • You have your own development environment. For reference, check Jesse Altman’s ideal explanation and for more depth, read the other articles he has in the series
  • You have your Salesforce code and metadata pulled down to your local machine for development
  • You have some way to deploy code (MavensMate, Force.com IDE, Ant migration tools, etc.)

Initial Gulp setup

The first part will be getting Gulp set up. To begin, you will have to download and install Node.js for whatever system you are using. After that is all set up, you are going to want to go into your Salesforce project and make a sibling folder to src. This is where we are going to load all of our assets, so name it appropriately. After that, make a folder for each asset type you are going to work with (i.e. js, css, img) and if you are going to use something like Sass or Less, then add a folder for that.

Next we are going to open the asset folder in a command line and enter the following commands:
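The commands themselves were lost in the migration; they were almost certainly the standard npm and Gulp bootstrap of the era (exact flags may differ):

```shell
npm init
npm install -g gulp
npm install --save-dev gulp
```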

The first command will bring up a series of prompts where you can enter some information on your project. The next two actually install Gulp.

Next we need to create gulpfile.js in the same folder and add these few lines
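Those lines are also lost, but given that the result is a gulp command serving the asset directory on port 8080, the file probably resembled this sketch (the gulp-connect plugin is an assumption; any static-server plugin works):

```javascript
// gulpfile.js
var gulp = require('gulp');
var connect = require('gulp-connect');  // npm install --save-dev gulp-connect

// Serve the current (asset) directory on http://localhost:8080
gulp.task('serve', function () {
    connect.server({ root: '.', port: 8080 });
});

// Running plain `gulp` starts the server
gulp.task('default', ['serve']);
```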

That file will set up the server (for now) so that typing the command gulp and navigating to localhost:8080 in a browser will show your asset directory.

Managing your dependencies

If you depend on a number of third-party libraries, you may want to consider a package manager. It is not strictly necessary for this article, but it can be a huge help. I will be using a tool called Bower. Bower lets you search for and download dependencies, much like npm, but with a focus on the front end. To get started with Bower, first we must run
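The missing command is simply Bower's own npm install:

```shell
npm install -g bower
```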

What you actually install at this point is up to you, but as an example I will install jQuery and Normalize.css. As with npm, Bower has an init command that will prompt you for various values.
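For the record, those steps were presumably along these lines (the Bower registry name for Normalize.css may differ):

```shell
bower init
bower install --save jquery
bower install --save normalize-css
```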

Local files, now with more cloud

Now that we have our files served up, we can use a bit of trickery to load from our little web server when developing locally, but then use the static resources in all the other environments. The basic premise of this is using a flag in a custom setting, something like Use Local, and either rendering a component with all the local assets, or the static resources. There are a lot of ways to actually lay this out, but for me, I will be using two sets of components so my CSS loads in the head and my JavaScript loads in the footer.

Before we even get to writing the components, we need a few things in place. First is the custom setting so you can turn the feature on and off. Next are some dummy static resources. Now, this depends heavily on what you are going to do with your assets, but for me, I plan on concatenating and minifying my JavaScript and CSS, and zipping third-party assets into their own file. For my purposes I will create a blank js file, css file, and zip file and upload them to the static resources. Once that is done, it is on to the components.
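As one possible shape for the CSS-loading component (the custom setting Asset_Settings__c, its Use_Local__c checkbox, and the AppCss static resource are all names I am assuming for illustration):

```xml
<apex:component>
    <!-- Local dev: pull styles from the Gulp server started earlier -->
    <apex:outputPanel layout="none"
        rendered="{!$Setup.Asset_Settings__c.Use_Local__c}">
        <link rel="stylesheet" href="http://localhost:8080/css/app.css"/>
    </apex:outputPanel>
    <!-- Everywhere else: fall back to the uploaded static resource -->
    <apex:outputPanel layout="none"
        rendered="{!NOT($Setup.Asset_Settings__c.Use_Local__c)}">
        <apex:stylesheet value="{!$Resource.AppCss}"/>
    </apex:outputPanel>
</apex:component>
```

The JavaScript twin of this component is the same idea with a script tag and an apex:includeScript.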

This design was originally from a GitHub repository for a Visualforce AngularJS seed. The design was later refined by Peter Knole.

The story so far

If you have been following along, you have all the bits and pieces set up. Now you need to put it together in a page and write up your CSS and JavaScript. The page can be anything, so long as it has the components for the CSS and JavaScript at the top and bottom respectively. Don't forget to set the custom setting to true so you use your local files. If everything went well, then viewing your page should work. If it doesn't, check that your browser isn't blocking mixed content. If you really want to test, make a change to your CSS, save, then reload and see that your page changed without any long save times.

Example local page (I am not a designer)

Building for now

You can now view your work in your own sandbox just fine, but what about everyone else? So this tutorial doesn't become a colossal pain, we will set the custom setting to false to simulate the remote environment. Viewing the page now shouldn't look so good. That is because we haven't done anything about the static resources. We are going to fix that, but we need a bit of help. First, grab a few more dependencies:
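The dependency list is missing from the migrated post; judging by the tasks broken down afterwards, it was something like this (plugin choices are assumptions; gulp-clean-css, for instance, later superseded gulp-minify-css):

```shell
npm install --save-dev gulp-concat gulp-uglify gulp-minify-css gulp-zip
```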

Now for the script
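The script itself did not survive the migration either; here is a sketch matching the zip-vendor, process-css, and process-js tasks described below (the folder layout and .resource file names are assumptions):

```javascript
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var minifyCss = require('gulp-minify-css');
var zip = require('gulp-zip');

// Zip everything Bower pulled down into a single static resource
gulp.task('zip-vendor', function () {
    return gulp.src('bower_components/**/*')
        .pipe(zip('Vendor.resource'))
        .pipe(gulp.dest('../src/staticresources'));
});

// Concatenate and minify our CSS into one static resource
gulp.task('process-css', function () {
    return gulp.src('css/**/*.css')
        .pipe(concat('AppCss.resource'))
        .pipe(minifyCss())
        .pipe(gulp.dest('../src/staticresources'));
});

// Same treatment for the JavaScript
gulp.task('process-js', function () {
    return gulp.src('js/**/*.js')
        .pipe(concat('AppJs.resource'))
        .pipe(uglify())
        .pipe(gulp.dest('../src/staticresources'));
});

// `gulp build` rebuilds all three resources
gulp.task('build', ['zip-vendor', 'process-css', 'process-js']);
```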

To break that down, let's look at the tasks that were just added. The zip-vendor task takes every file and directory in the bower_components folder, zips it up, and puts it in the static resources. The process-css and process-js tasks get all our CSS and JavaScript files, concatenate them into their own files, minify them, and then place them in the static resources. Once that is all set, run gulp build to rebuild the static resources. After that, you just need to get the files to the server however you are comfortable. For example, if you are using MavensMate, just run the compile.

Building for the future

There is so much more you can do than what I laid out. The tools and techniques of web development are constantly changing and shifting. This article exists to show you one possibility; nothing I have done is set in stone, and it barely scratches the surface. The makers of jsForce, for instance, have a post I took inspiration from for deploying assets right from Gulp. There is also a wealth of tutorials on these tools, as well as many others that can be added to your workflow.

I say to you now, welcome to the present age of web development, we’ve been expecting you.

Technical User Standards Design Experience Interface Documentation

If I had ever thought I would be doing a post on documentation, I would have made sure to have more topic starters in the backlog. Let me put this out in the open: I hate writing documentation. Now that that is out of the way, I feel empowered, and what better way to express that than by judging all of you. I am going to guess that a great many of you hate documentation as well. Don't get put off by that statement or think I am belittling anyone in particular. I have seen it everywhere: on projects big and small, proprietary and open source, useful and utter garbage. Even my own GitHub projects languish with bad documentation. Actually writing documentation, it seems, is farther down on my to-do list than writing a blog post about writing documentation.

Part of the problem is that once you start getting serious about documenting, it appears that everyone has different ideas of the style, scope, and purpose. With every single sample different, finding a consistent thread of what a specific type of documentation is for becomes an infuriating mess. If you are like me, this turns into mental gridlock as you agonize over how far to go with the details.

Let me paraphrase a joke I once heard: A group of developers were going over all the different formats their new media player would play. They narrowed it down to a list of 13 formats. Staring at the mess they thought to themselves, “Hey, we’re smart people, we should make a new format to overthrow and unify these other formats.” Then they had 14 formats to code to.

I mention that, because I will not be calling out one style of documentation above the rest. No, I will be adding my voice to the din. Just as a forewarning, I have no place telling anyone how documentation should be done. That being said, I will divide the documentation into three segments. The first will be the foundation, the second will be the stories, and the third will be the generated docs. They have been split based on their audience and their goal.


The Foundation

I call this the foundation because it holds information pertinent to the entire project and is readable by everybody, from the managers to the developers. The goal here is to be only as technical as we need to be. To use a Heroku project as an example, this area will hold information like what technology is being used, what coding standards should be followed, what certifications the app must meet, and any other project-wide promises the app will be held to. Much of this paragraph is built off of section five in the book Software Architecture for Developers. If your project is using MySQL instead of PostgreSQL, that goes here. If your project is being held to five 9s, God help you, and that goes here. If you have a specific task to submit your app for PCI certification, then that goes in the next section, but the fact that it must be PCI certified goes here. So what is the next section?


The Stories

I love stories. No, seriously, a well-written user story is a wonderful thing to behold. I am basing this section off of my experiences with Jira, but I can think of no better place for documentation about what the project should be than right in the Jira stories themselves. These should contain all of the information about specific functionality that has been promised. The arrangement works out perfectly, as all the information, including screen mocks, can be added to the story. Questions and comments can be added as well to turn it into a living document, recorded for posterity. This should be a boon to QA testers, who only have to look in one location to find all the information on what a task should and should not do.

Code Documentation

While the above described what should happen, the code comments should communicate how. I am not talking about commenting every line of code; that is pointless and prone to error. Your code documentation should be twofold. First are structured comment blocks, ones that can be parsed by documentation-generating tools, placed at the beginning of classes and at least on all public APIs. You can add them to private methods as well to help out maintainers (and yourself) in the future. Second are inline comments, used ONLY when there is a need to explicitly call out what you are doing. To use a real-life example: to get JSTL referenced correctly in a recent project, I had to add three lines of what can only be described as code voodoo. I added a comment stating not to remove it carelessly, why it was there, and a link to the page I got the piece of code from.


All told, with these three components there should be a complete picture with very little overlap or duplication. I say should because documentation is a soft skill. If you have bad code, you get compilation errors, or test failures, or regressions in QA. Bad documentation can take a long while to surface, if it ever does. The side effects of bad documentation are confusion, inefficiency, and delay that are hard to quantify but easy to feel. So whether it be with a grain or a fast-food serving's worth of salt, take this plan into consideration.

MavensMate Vs Force.com IDE

Are you looking for a no-holds-barred, pulse-pounding, head-to-head matchup between two established titans!? Then I am sorry, this is just a comparison between two code editors; I am sorry to have wasted your time. If, however, you came here looking for the type of fuel that can keep a flame war burning for way longer than it should, then have I got a post for you. First, let me frame this for anyone who isn't in the know.

The Force.com IDE was a plugin for Eclipse built on the Metadata API. If you wanted to do any code editing out of the cloud, this was about your only choice. It was rough at times; tests were slow to the point of uselessness, a few features that people had come to expect from code editors were missing, and it used the whole weight and heft of Eclipse to essentially act as a text editor with some fancy saving. MavensMate came along and, for some of my coworkers and myself, completely blew the Force.com IDE out of the water. It was light, it was fast, there was code completion, there was an intuitive test results screen, AND you could play games during a deploy. How cool is that! I suppose the only drawback is that the text editor it is a plugin for, Sublime Text 3, costs $80. That is not too bad considering how well loved Sublime is, but it is still something to keep in mind if you are going to be rolling this out to a large number of developers.

OK, history lesson over. If you can't tell by the fact that the above fight seems kind of lopsided and yet I am still writing, something has changed. The Force.com IDE has been released as open source and rebuilt on the Tooling API. Right off the bat this scores big with me, since I am also a Java developer and can better understand what the code does. It is still an Eclipse plugin, so that complaint still stands, but let's see if it can wow us, shall we?

The Force.com IDE install process remains mostly unchanged, and that is fine, as the Eclipse plugin manager is stable and robust enough to handle this easily. The project setup screen is also very familiar, though I did have to resize the 'Choose initial project contents' dialog for the components to show up. It has a pile more choices for metadata types it can pull down, but it has always had that advantage. The tests were definitely faster, which was nice to see, though the coverage report is lacking some polish. Two things I could not get working were the automatic builds and the code completion. The automatic builds caught me off guard, as that was a feature that used to work. The code completion was something I was looking forward to, but no dice. At this point I uninstalled and reinstalled, but it didn't get any better.

When I set out to write this post, I wanted to be surprised. I will remember to be more careful with how I word my wishes: I was indeed surprised by how unimpressed I was. Compared to the old version, it did not feel like a significant improvement, or an improvement warranting a blog post, or an improvement at all. So I probably sound pretty down on the updated Force.com IDE, and I am, but I am hopeful. Releasing the source and making it much more possible to extend is a huge step forward. As it stands, MavensMate still owns the Force.com IDE, but I will be happy to do another comparison in the future to see if the lead disappears.

Here comes NimbusMock!

Oh look, a GitHub repo. That's right, my mocking framework for Apex has a name and a repo. Yay? Right now it is a 0.0.1 release, which means there is lots of chance for the API to change (I am still trying to decide whether to split out the object mocking and method stubbing). For now, let's see what it has to offer.

Class Setup in your Application

Before you can even hope to mock anything, there is a certain way to go about setting up your classes. The service classes must be set up with an interface; the point of the mock will be to override the default functionality with the mock methods. Any classes that are going to use these mocks must be set up so that the mocks can be supplied either through the constructor or a setter.

How to Mock

The first step, assuming your classes are all ready, is to make your mocks. In MockFactory.cls, create an inner class that extends NimbusMock and implements your interface. Your methods will simply call getCall with a unique name for that call in that class and an object array of your parameters. The following is a reference for how to mock your calls:
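The reference snippet itself did not survive the migration; based on the description, a mocked method looks roughly like this (the interface, method name, and return cast are illustrative):

```apex
public class MockFactory {
    // Hypothetical service interface being mocked
    public class OrderServiceMock extends NimbusMock implements OrderServiceInterface {
        public String getAuthCode(Id orderId) {
            // A name unique within this class, plus the parameters received
            return (String) getCall('getAuthCode', new Object[] { orderId });
        }
    }
}
```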


Using your Mocks

Creating an instance of your mock is as easy as any regular class. For example:
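The example is missing from the migrated post, but per the text it really is just ordinary construction (OrderServiceMock being a hypothetical mock in MockFactory):

```apex
MockFactory.OrderServiceMock mockService = new MockFactory.OrderServiceMock();
```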

From there, you need to tell the mock what to return or throw when a call is made with some combination of parameters. The base is the same in both cases: call the static when method on NimbusMock and pass it the call to the mock method with the parameters you expect. After that, you chain either thenReturn or thenThrow depending on what you want it to do. When put together, it looks something like this:
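The original snippet is gone; assembled from the description, the whole arrangement looks something like this sketch (the mock, orderId, badOrderId, and OrderException are all stand-ins for whatever your test uses):

```apex
MockFactory.OrderServiceMock mockService = new MockFactory.OrderServiceMock();

// Return a canned value when the call matches these parameters
NimbusMock.when(mockService.getAuthCode(orderId)).thenReturn('AUTH-123');

// Or throw instead, for testing the unhappy path
NimbusMock.when(mockService.getAuthCode(badOrderId))
    .thenThrow(new OrderException('authorization failed'));
```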


What's Next

  • A generator for the mocks. I plan on writing this in Apex and letting the developer write up a script calling it with a list of classes they want mocked
  • The ability to set how many times the same call can be made, and to see how many times a call was used after the fact
  • Mocking classes without interfaces. This isn't that hard, I just need to settle on how I want to go about it

I am keeping good on my word

Like the title says, I am keeping good on my word to get some stuff done. I have been working on my mocking framework to post it to GitHub, and there have been a number of changes. Probably the biggest noticeable change is the switch to a Mockito-like when+doReturn style. Along with this, I no longer use a static mock manager, which means there are fewer chances of method name conflicts. The downside is that the mock now needs to be injected via constructor or property, as the mocked calls are specific to the instance created in the test. The mocks have also been softened and now allow for out-of-order method invocation. There are still matchers, but they have changed a bit; they are very much a workaround for the lack of reflection and I expect them to change in the second release. In fact, there is a laundry list of features I want to add, but I need to get something out there first before I start planning where I will put the kitchen sink.

Now, some may be wondering why I have not put anything on GitHub, and the reason is that this was a major overhaul from top to bottom. I didn't want some broken piece of code attached to my name. Right now it is 90-ish percent done. The code is working, but I want more tests and some cleanup. You see, I do most of this at night… or the morning, as it just so happens (12:11am as of this sentence), and while I produce working code, I also produce code with a high WTF/m at this time. So once I have it cleaned up, tested, and commented, I will push it to GitHub and make a big ol' post about it here.

I’m not dead

I feel happy. I feel happy…

No Grail fans? I thought I might use a Dead Parrot parody, but in that sketch the parrot had most definitely ceased to be. “So where have you been,” you may be asking yourself, and to you I reply, “that is none of your business, but I will tell you anyway cuz I’m a nice person and I am sure you are too.” I moved, settled in, the Steam sale happened… yeah, anyway. I want to get back to posting, and I would also like to update my site a bit. It seems to be lacking, well, imagine me frantically waving at the screen and you get the gist of it. No social icons, no about-me page, and a bare-bones theme leave this looking like a fly-by-night hack job rather than the work of someone genuinely interested in this technology. I also want to get my mock framework out on GitHub. I will admit, when I heard that someone else had beaten me to the punch with a few features I would have liked to use, I got a little down on my code. Well, that was wrong of me, because you know what is better than one good implementation? TWO! Let’s keep forcing the issue and maybe, just maybe, Salesforce will throw us a reflection bone or something, anything to make this mocking thing less clunky. A guy can hope, right? Anyway, let’s see if I can coax this blog back to life, a’ight? See you all soon!

Enhanced Mock Mocks

Well, color me surprised: my last article on mocking in Apex was, well, a little more popular than I thought it would be. I was happy that so many people enjoyed it, but it got me thinking, “Bob, you handsome and brilliant, yet refreshingly humble man, you can do more for your fans!” After an obligatory back-patting, I set off to add more features. This builds entirely off of the last article; in fact, you are doing yourself a disservice if you don’t read it posthaste.

We start with a change to add parameter matchers, a handy feature for times when you are not 100% sure of the value that is going into your method. All we need is an enum.
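Based on the description below, a minimal version of that enum might look like:

```apex
// Minimal matcher enum: anyVal means "accept any value in this
// parameter position". Room is left to add more matchers later.
public enum Matchers {
    anyVal
}
```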

This is the bare minimum to work while still providing room for enhancement. The only thing in this enum is the value anyVal; what we are saying is that when this is used, any value is accepted. Next, I modified my interface, service, and test to give the method getAuthCode a Datetime parameter. The OrderService will pass in Datetime.now(), making this almost impossible to mock without matchers. There also needs to be a change in MockMaker to handle this new condition.
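A sketch of that interface and service change, with names assumed from the surrounding text (the interface name here is my own placeholder):

```apex
// Illustrative sketch of the change described above.
public interface PaymentProcessor {
    String getAuthCode(Datetime requestedAt);
}

public class OrderService {
    private PaymentProcessor processor;

    public OrderService(PaymentProcessor processor) {
        this.processor = processor;
    }

    public String authorize() {
        // Datetime.now() is effectively unpredictable from inside
        // a test, which is exactly why a matcher is needed.
        return processor.getAuthCode(Datetime.now());
    }
}
```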

It isn’t much different than it was two weeks ago, except now it checks whether the parameter is a matcher. If it is, then since there is only one possible value it could be, we skip any check on that parameter and continue to the next. Now we can change our test to work with this new code.
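The matcher check inside MockMaker might be sketched like this (the surrounding method and variable names are my assumptions, not the original code):

```apex
// Illustrative sketch: compare recorded parameters against the
// actual call, letting a matcher short-circuit the comparison.
for (Integer i = 0; i < expectedParams.size(); i++) {
    if (expectedParams[i] instanceof Matchers) {
        // anyVal is the only matcher, so accept whatever came in.
        continue;
    }
    if (expectedParams[i] != actualParams[i]) {
        return false; // not the call that was recorded
    }
}
```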

Now our call to getAuthCode will work no matter what we pass in. A quick test run shows both tests passing… but that isn’t right, now is it? We only changed one test, yet both still pass. It isn’t enough to simply check the parameters; we need to check the number of them as well. Otherwise our loop could give false positives, like in this case, or throw exceptions. Let’s put a halt to that right now.

One extra line asserts that the number of parameters passed in is the same as the number expected.
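One way that extra guard could look (the original's exact line isn't shown, so this is an assumption):

```apex
// Illustrative sketch: reject calls with a different number of
// parameters before comparing them one by one.
if (expectedParams.size() != actualParams.size()) {
    return false; // a different arity can never be the same call
}
```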

There is another case we are still not testing for. If you remember, last time I used mocks to return a purposely bad value to test out some error handling. What if we wanted to test what happens when the service we are mocking throws an exception itself? We need some more code for this one, but before that, we need an exception to throw.
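In Apex a custom exception is a one-liner; the name here is my placeholder:

```apex
// A custom exception for the mock to throw on command.
public class PaymentException extends Exception {}
```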

Phew, that was a lot of work, but I soldier on. Next is some code in our service to expect such an exception.
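A sketch of what that service-side handling might look like (method names and the error string are my own illustration):

```apex
// Illustrative sketch: catch the mocked dependency's exception
// and turn it into a result the caller can act on.
public String placeOrder(String orderId) {
    try {
        return processor.processPayment(orderId);
    } catch (PaymentException e) {
        return 'PAYMENT_FAILED: ' + e.getMessage();
    }
}
```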


Now we need to get MockMaker to throw the exception at the right time.

This getCall method keeps getting all of the attention. Another conditional now checks whether the return value is actually an exception and, if so, throws it. The only thing left is the test.
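That conditional might be sketched as follows (variable names assumed, not taken from the original):

```apex
// Illustrative sketch: if the stored "return value" is really an
// exception, throw it instead of handing it back.
Object returnValue = call.returnValue;
if (returnValue instanceof Exception) {
    throw (Exception) returnValue;
}
return returnValue;
```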

The only real difference here is that the addCall for processPayment has an exception passed into it, and the final assert reflects the caught exception.
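Pulling the pieces together, the test could look something like this (MockMaker's real addCall signature isn't shown above, so the shape here is an assumption):

```apex
// Illustrative sketch: register an exception as the call's
// "return" value, then assert on the handled result.
@IsTest
static void processPaymentThrows() {
    MockMaker mock = new MockMaker();
    mock.addCall('processPayment',
        new List<Object>{ 'order-123' },
        new PaymentException('card declined'));

    OrderService service = new OrderService(mock);
    String result = service.placeOrder('order-123');

    // The service caught the exception and reported the failure.
    System.assertEquals('PAYMENT_FAILED: card declined', result);
}
```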

With just a bit of work, MockMaker now checks the quantity of parameters, uses matchers to check for values that might not be known when the test starts, and throws exceptions on command. While not on par with some other languages’ mocking suites, it can go a long way toward making a developer’s life easier.