We’re all aware that speed to market with new products and services can be a crucial business differentiator. This story is about an ongoing low-cost, bottom-up experiment conducted by a team at leading Nordic insurance firm If to examine how the business could act faster in the marketplace.
Like other market leaders, If runs the risk of becoming inert and complacent given its privileged position in the industry. So we decided to confront this challenge head-on, motivated by the goal of building corporate agility. What does it look like to be truly agile in our industry? Could we move as quickly as a start-up company in putting a new offering together? And what sort of obstacles would we face?
To answer these questions we formed a cross-functional team, consisting of Måns Edsman, Agneta Liljenberg, Poul Steffensen, Line Hestvik, Mats Nordenskjöld and Anne Ramsby, and decided that the best approach was to run a simple experiment. Specifically, we set out to develop and trial a new service offering as quickly as possible, and to monitor both the process and the outcomes carefully. We decided not to do something enormously difficult, but simply to identify a process by which agility could be encouraged. In other words, we chose something relatively simple that we could sensibly ‘do’ in the experiment timeframe.
A brainstorming session in January 2011 narrowed the experiment's scope. We decided to focus on the claims handling process, as this has a big impact on our ability to gain new customers and retain existing ones. Could we differentiate If's claims handling by not only identifying a new service, but also quickly assessing its market potential in a practical pilot test?
We also drew up three criteria to evaluate our ideas. The experiment had to be:
- Simple: doable in a five-week project execution phase;
- Relevant: focused on customer value;
- Practical: drawing on the competence, resources and logistics available to the experiment team.
We also realized that while there is strong industry knowledge and business intelligence within If, we often faced the challenge of translating such knowledge into decisive action: to close what is sometimes called the knowing-doing gap. So as well as doing the experiment, members of our team also dug into the outcomes of recent If projects, to understand how quickly they had been put in place, and what their obstacles had been.
Prototyping a new service concept
Using our selection criteria for the experiment, we settled quickly on the idea of picking up customers and driving them to their rental vehicle. This was not a new idea by any stretch – indeed, it is one of the key marketing themes of Enterprise Rent-a-Car, the large US chain. But it was not currently being offered in the Scandinavian market, so it served our purposes nicely. We tailored the Enterprise model slightly, to come up with the concept of If partnering with a number of vehicle repair workshops. The offer to the customer was to deliver the car (after repair) to their home or workplace, cleaned inside and out, in return for which the customer would fill in a questionnaire on their satisfaction with the service.
By design, the experiment involved relatively little investment and only minor IT involvement. Importantly, we did not build a business case, nor did we seek broader endorsement for the idea within the company – we decided to “just do it”. To be sure, we were very aware of the cost side of the equation, but by keeping the pilot small we were able to address the business case later.
Delivering pace and agility
How quickly could we design and run the test pilot? Well, we committed to getting it done within eight weeks, and we met this timeline (through enormous amounts of hard work!). There was no one key that unlocked this speed, but rather a number of contributory factors, including the following:
- Senior level buy-in – We figured out that if you're going to design an experiment that will ultimately benefit a specific business unit, you need to involve that business unit from the word go, and the starting point has to be at the very top. We went to the head of the Claims organisation and he confirmed that our experiment was, indeed, doable. Furthermore, he was prepared to give us both business development and technical people to support the process and, crucially, provide us with the contacts we would need with the car workshops that we initially thought we could partner with.
- Define all the key activities very early in the process – A project plan is essential. We listed as many of the foreseeable activities as possible very early on, to ensure we could get what we wanted in the given timeframe. We also used this activity list as the basis for follow-up and, if needed, for re-planning. We established a common team site on SharePoint and agreed weekly phone meetings, as well as a day-by-day status update between us and the people in the Claims organisation who were in contact with the car workshop company, so that we could follow the experiment's progress.
- Agree on defined roles – A strong person at the helm keeps everyone focused on the tasks in hand. We allocated clear project leaders for both the agility pilot and the benchmark study. We used the core team as a working steering group and we all contributed to the common parts of the experiment, including the preparation and conclusion. The fact that all members of the team held senior positions in their ‘day jobs’ and were highly competent meant that we had the necessary contacts to help us garner support when we needed it.
- Involve people with specialist subject matter knowledge – The resources made available by the head of Claims were crucial to the ultimate shape of the experiment. They took responsibility for clarifying the project plan both in terms of the vehicle repair workshops involved and the customers. We had gone in with the idea of using several workshops, but the Claims people felt this would complicate the pilot. They advised us to choose just one workshop. Not only that, but they knew exactly which company to approach. Their industry knowledge was important for us because it facilitated the speed and agility we were seeking. It allowed us to quickly put in place all the arrangements we needed with the workshop company. It’s a key learning when it comes to running pilots – don’t try and do everything yourself; use the specialist knowledge available to you.
- Keep it simple – The objective of this experiment wasn’t to launch something huge, but to test the processes by which agility is enabled and speed-to-market assured. So simplicity was a key design parameter: We decided to provide an extra service to 100 customers who were having their cars repaired and this comprised cleaning (inside and outside) and delivery of the car to their home or office. All the customers had to do was fill in a questionnaire about the service using a basic 1-5 scoring system and we developed a very simple form of status report, by re-using an existing IT system. Throughout this process, we worked hard to avoid over-complicating things, and this ensured buy-in from all parties.
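To give a sense of how little reporting machinery a pilot on this scale needs, here is a minimal sketch of how 1–5 questionnaire responses could be rolled up into a simple status summary. This is purely our own illustration – the function name, data shape, and output format are hypothetical, not If's actual reporting system:

```python
from collections import Counter

def summarise_scores(scores):
    """Roll up 1-5 questionnaire scores into a simple status summary.

    Returns the number of valid responses, the average score, and a
    count per score value - roughly the level of detail a small pilot
    needs, and nothing more.
    """
    valid = [s for s in scores if 1 <= s <= 5]
    if not valid:
        return {"responses": 0, "average": None, "distribution": {}}
    return {
        "responses": len(valid),
        "average": round(sum(valid) / len(valid), 2),
        "distribution": dict(Counter(valid)),
    }

# Example with a handful of hypothetical pilot responses
report = summarise_scores([5, 4, 5, 3, 4, 5])
print(report)
```

The point of the sketch is the simplicity: a flat list of scores in, three numbers out, with no database, workflow tool, or custom IT build required.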
Pilot execution and benchmarking
Once all the plans were in place, a four-week execution phase began, with cars both cleaned and delivered by the workshop partner in close cooperation with If’s Claims people. A further week was added to the execution phase during which just the car cleaning service was offered. This process went smoothly, largely as a result of the high level of project planning and good relations between the Claims team and workshop operator taking part in the pilot.
As noted earlier, a benchmarking study ran in tandem with the broader agility experiment, after an initial survey of the project group had suggested that the company rated rather low on innovation. The benchmarking study looked at seven previous If projects as well as the ongoing agility experiment. The benchmarking criteria included the type and complexity of the project, the number of IT hours spent, whether it was championed, whether project teams were taken out of their day-to-day jobs, what the timeframe was and whether a formal project management tool was used.
As described in the previous section, the biggest challenge was to define a new service concept that was sufficiently simple that it could be defined and implemented in just two months. We met this challenge by getting senior level buy-in, by using existing resources where possible, and by continually reminding people that this was an “experiment”.
Was the experiment a success? This project certainly worked in terms of the process by which an idea is developed, piloted and considered for possible commercial adoption, and we’ll look at the success factors for this below. Interestingly, the service concept itself had mixed success – which is, after all, the point of running an experiment.
We are not going forward with the full scope of the service, because of the cost. But of course the bulk of the cost is in the delivery of the car, rather than the cleaning, so there is potential for offering the cleaning service as a standalone value-add for the customer (at no extra cost). What we’ve shown is the value of running a pilot before developing a full service model.
At the time of writing, we are still planning the next steps. A meeting has been scheduled to discuss whether the vehicle cleaning element at least can be pursued as a service differentiator, and a business owner found to take it forward if it is. As for the benchmark study we did in parallel with the experiment, we were surprised to find that there had been quite a few fast rollouts of new product ideas in recent years. So the earlier assumption that we are not an innovative company is false. We are innovative and we are fast. What we need to do as a company is believe in ourselves as innovators. There needs to be a will to carry out projects alongside our day-to-day work.
The benchmarking study highlighted three common elements of a successful project:
- Personal interest: there is a connection between project failure and a lack of personal interest; personal engagement with the pilot is therefore a clear success factor.
- Competence: a competent project team will get results.
- Broad mandate: the project team needs to be given the mandate to shape the pilot as it evolves; to take ownership.
The fact that we had the support of the head of Claims was also a positive door opener for our experiment. Interestingly, the benchmarking, along with our own experience, also revealed that an advanced governance model doesn't necessarily add any value in small to medium scale projects. In fact, a formal tool can easily stifle creativity in these projects. Another revelation from the benchmarking is that releasing extra time for the people involved in the development work is not necessarily a success criterion.
While our specific experiment involved providing a value-added service to our customers, our real purpose was to see if we could move an idea from concept to implementation in a much shorter period of time than usual. We succeeded in doing this, and we now know a lot more about how to apply this experimental approach on a consistent basis. In summary:
- Don’t spend time on the business case. If you can design and implement an experiment in two months, for a cost that sits comfortably within one team member’s operating budget, then it’s best just to push ahead, do the experiment, and worry about the business case later. The time saving is massive. And the learning from the experiment will vastly exceed the cost.
- Beg, borrow and steal resources. With the air-cover provided by positioning this as an “experiment”, we were able to move much more quickly, and with much less cost, than would have normally been the case.
- Senior level buy-in. Perhaps this goes without saying, but we would have spent a lot of time building support for the experiment with all the key individuals if we hadn’t already got the full support of the head of Claims. It also helped that each of us was senior enough within our functions that we could get hold of the necessary resources fairly quickly.