“Strive not to be a success, but rather to be of value.”
-Albert Einstein
I’ve been asked many a time to build a “tool” for some project. Or, I’ve been on the receiving end, told to adopt a new tool from a different team. A tool could be an over-engineered Excel workbook, a small bit of R code, a dashboard, or even an enterprise-scale software product. Once built, we would all like to think our tool has solved a grand problem in our organization, or has at least marginally improved the sophistication of some analysis. After deployment and a few training sessions, management then wants to know the adoption rate. This is of course a reasonable ask. However, many of us have probably experienced the disappointment of discovering that adoption is surprisingly low.
In my experience, there are four reasons for lagging adoption. I’ve ordered them from least to most likely. It’s not always comfortable to self-reflect on failure, but reflect we must.
Scenario #1: No one knows it exists
This is a rare scenario I’ve probably run into fewer than three times. You may have just deployed the most beautiful tool any actuary or underwriter has ever seen, but for whatever reason no one adequately announced its creation. Thankfully, the remedy is simple: communicate! At times the actuary isn’t in charge of sending out group-wide emails, or isn’t managing the internal portal. The key is to identify early in your process how the messaging is going to work. Will the tool be announced just via email? Or will there be a brief demo at a team’s monthly check-in? Make sure you have a list of all required modes of communication and start checking them off.
Scenario #2: People are just slow to adopt
People hate change; we all know this. In a prior role I was tasked with redesigning an Excel rater from the ground up. Unfortunately, the tool I was replacing was hard-wired into the underwriters’ brains. Everyone had their own pet copy of the old tool saved somewhere on their desktop. When I asked why people were not adopting the new tool, I’d often get excuses like “but I’m faster in the original tool” or “but I have to spend so much time learning this new one.”
The solution to this scenario isn’t always pretty, and you need to discuss with management how heavy-handed you want to be. A good first step, though, is clearly setting a retirement date for the old tool. You can’t just tell people “we are slowly rolling onto the new tool and we expect you to use it at each renewal.” Give a hard date to set expectations! Of course there will be stragglers, but a hard date sets a goalpost to measure people against. You also must have management’s backing on the adoption. The worst thing you can have is someone undermining the tool, telling people off to the side that they don’t like it either.
Scenario #3: It wasn’t built with the end user in mind
Before becoming an actuary, I went to graduate school for a master’s in teaching. I taught high school math for a handful of years before deciding it wasn’t the gig for me. It isn’t a novel concept, but one style of lesson planning I was taught was “backward design.” Essentially, you ask yourself what you want the student to be able to do after your lesson. All too often a new teacher would think of something “fun” or “interesting” to share with the students but struggle to articulate what anyone would get out of it. Just because you find something fun or interesting doesn’t mean it will have a successful outcome.
The same holds true for these tools. We actuaries like to think we’re clever building some slick calculation, but we often forget to ask the end users what they actually need. We get so engrossed in the concept and mechanics of the tool that we forget someone needs to use this thing at the end of the day.
The key here is to include your end users from the beginning. If you’re an actuary building a tool to be used by underwriters, and a good group of opinionated underwriters is not on the team, the project will fail. An exercise I’ve found particularly useful is a card sort activity. I stole this idea from a software engineer who was really good at product design.
Here’s the basic idea. Talk with your end users and list out the main things people are looking to do in the tool. In my case we made a laundry list of ~30 items: statements like “calculate rate change,” “compare pricing to benchmarks,” etc. Be careful, though: these are not backend things you as the builder need to technically solve; they are things the end users want to do in the tool. It’s about them, not you, right now.
From there, write each of these statements on a card. Next, sit down with a group of end users and have them sort the cards into a 3 x 3 grid. The x-axis is the frequency of the task (low to high), and the y-axis is the importance of the task (low to high). This information is gold when you start to build your tool. You’ll learn the highest-priority items (top right corner) and the lowest-priority items (bottom left corner).
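If you run the card sort with several groups and want to tally the results, the bucketing is simple to script. Here is a minimal sketch in Python; the task names and ratings are hypothetical examples, not from the original exercise, and in practice the scores would come from averaging placements across your end users.

```python
# Bucket card-sort results into the 3 x 3 frequency/importance grid.
# Ratings: 0 = low, 1 = medium, 2 = high on each axis.
from collections import defaultdict

# Hypothetical example cards: (task, frequency, importance).
cards = [
    ("calculate rate change", 2, 2),
    ("compare pricing to benchmarks", 1, 2),
    ("export summary to PDF", 0, 0),
]

def sort_into_grid(cards):
    """Group (task, frequency, importance) tuples by their grid cell."""
    grid = defaultdict(list)
    for task, freq, imp in cards:
        grid[(freq, imp)].append(task)
    return grid

grid = sort_into_grid(cards)

# Top right corner (high frequency, high importance) = build these first;
# bottom left corner (low, low) = safe to deprioritize.
print("Build first:", grid[(2, 2)])
print("Deprioritize:", grid[(0, 0)])
```

The value isn’t in the code, of course; it’s in the conversation with the end users that produces the ratings. The script just makes the top right and bottom left corners easy to read off once you have them.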
Remember, the tool isn’t meant to show others how clever you are as an actuary; you’re there to solve the end users’ problems. With that said, let’s move to the last scenario.
Scenario #4 (most egregious): It doesn’t actually solve any real problem
At first, this scenario might sound odd. What do you mean my tool doesn’t actually solve a problem? Do you know how much testing and calibration I did?
The truth of the matter is that (re)insurance companies are littered with tools, gadgets, workbooks, and widgets galore. Why any of these were made in the first place is anyone’s guess. However, the first thing you need to ask when someone, management or otherwise, requests a tool is: “What problem are you looking to solve?”
Raise your hand if upper management has said something to the effect of “You know, it would be interesting to see X, Y, Z.” There are an infinite number of things I would find interesting to see in my work, but interesting doesn’t equal a business problem. If you go down the path of building tools to satisfy someone’s random interest or curiosity, you’ll quickly see your tool start to collect dust after being used a few times, because it was built to answer a very shallow question. Really poke and prod people to articulate what problem they are looking to solve.
This problem exists for us actuaries as well. I am guilty of being my nerdy self and thinking, “You know what, I bet if an underwriter could easily calculate [blank], that would be helpful.” What you’ll quickly find yourself doing is building a tool and then looking for a problem for it to solve. Maybe you do have a great idea and it would be helpful, but you need to go back to scenario #3 and talk with the end users.
Just look at your smartphone: how many apps or features do you actually use? For each feature you don’t use, which of the scenarios above does it fall into?
The best thing to do before building anything is to remember you’re trying to solve a problem. Not a random query or a fleeting point of interest, but a real, meaningful business problem.