How we're modernizing a legacy BI application for an insurance giant. Part I

Read this if you think your BI app is ready for the scrap heap.

9 min read
App Modernization
Business intelligence
Digital Transformation
Insurance

The years 2012–2014 saw one of the greatest flops in the history of legacy software modernization. Some $300M in federal grant money went down the drain, the "Cover Oregon" healthcare exchange failure made the headlines and brought about a brutal litigation process between Oracle and the State of Oregon.

Being a Delivery Manager myself, I dare say Oracle went through some hell trying to revive "the worst online marketplace in the nation," as the Oregon health exchange used to be called. Legacy modernization is always a challenge, and an overhaul of that scale is something else entirely: the Oracle folks ran into some serious issues, and the technological ones were only secondary here (more on that along the way).

On a positive note: is there a way to successfully update legacy software? Hi, my name is Anatoly Bankovsky, I'm a Delivery Manager at Symfa, and this is the story of exactly such a success. This week I'm sharing the details of a Business Intelligence development project for our US insurance client, a company employing over 6,000 people worldwide. This article will

  • shed some light on the hurdles and the value of legacy modernization
  • help you identify the opportunities for reasonable legacy modernization within your company; and even
  • soften the harsh accusations leveled at Oracle, who for some reason couldn't handle their case.

Table of Contents

  • Problem positioning and expected value highlights
  • The challenges of our BI system modernization journey
  • Turbulent knowledge transfer and an uneasy start
  • Find out how the story unfolds in the next part next week

Problem positioning and expected value highlights

Wouldn't it be easier to simply retire the legacy application and start anew?

Yes, but even greenfield software development projects fail. That's something our client, along with the State of Oregon, knew firsthand: oversights happen across multiple domains, from poorly documented requirements to underfunding to poor organizational management. For our client, retiring the system and starting from a clean slate meant piling even more work on top of an already huge effort. Modernization isn't always justified, but in many cases, this one included, starting over means getting entangled in an even more dramatic experience than living with an inefficient system.


Reasons for software project failure. Data source: Statista


Moreover, this project held promise. The BI application market was valued at US$25 bn in 2023 and is projected to grow at 5.83% per year, making the application a great potential source of extra revenue for our client.

My personal opinion, if you will, is that in an industry where a gazillion software projects go to the trash bin annually, modernization instead of retirement gives some peace of mind.

This is a two-in-one BI application and report generating system (premiums, claims, cession sales). The application is designed to collect, process, analyze and visualize data. Currently it operates in the US only and mainly serves the needs of the contractors selling our customer's services. The trouble is, it's slow to deliver business value and quick to consume corporate resources (in terms of human hours and energy).
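
To make the scope a bit more tangible, here is a toy illustration of the kind of roll-up such reports produce. The contractor names, figures and the loss-ratio column are invented for the example and aren't taken from the client's data or code.

```python
# Toy illustration of a premiums/claims/cession roll-up per contractor.
# All names and numbers below are made up for the example.
import pandas as pd

records = pd.DataFrame({
    "contractor": ["Acme Brokers", "Acme Brokers", "Beta Agency"],
    "premium":    [12_000.0, 8_500.0, 15_300.0],
    "claims":     [3_200.0, 0.0, 7_900.0],
    "ceded":      [2_400.0, 1_700.0, 3_060.0],
})

# One report view: totals per contractor plus a derived loss ratio.
report = records.groupby("contractor")[["premium", "claims", "ceded"]].sum()
report["loss_ratio"] = report["claims"] / report["premium"]
print(report)
```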

To give the project positive momentum from day one, we started with a very clear problem statement and a description of the expected value. This then gave a healthy start to the vision and strategy work.


Thus, three key problems were put before us, and these gave rise to a more detailed roadmap.

With these three issues closed at the lowest reasonable cost for the company, the system would start generating value, while we could proceed with tweaks and further performance improvements.

Have a look at the system key modules and functionality in our case study on this project.

The challenges of our BI system modernization journey

Enterprise-level legacy modernization creates challenges of its own. Basically, you do the same things as on any software development journey, but with more scrutiny on scalability, performance and efficiency. On this project there were at least three challenges:

  • find a high-performing and cost-effective reporting option that would pass two rounds of corporate decision making
  • reuse denormalized data collected for a different kind of report
  • establish a new, more efficient architecture using the client's available tools and infrastructure.


Decision making

The client has a splendid IT department with deep expertise in business analysis and software development alike. The organization is enormous, with a rich 20-year history. Its software has a long and rich history too, with some unexpected twists that only the old dogs can warn you about. Whenever architectural or tech stack decisions have to be made, the client, very logically, has the upper hand.

  1. Decision making starts at the IT director level. If our plan passes a scrutinous check for performance, security and cost-effectiveness, it moves on to the next level.
  2. VPs. They don’t speak the language of sophisticated architectural approaches or all things technological. It’s the kingdom of numbers and values. If it’s simple and costs a dime, we’ll do it. This is how it works. After all, enterprise players aren’t rich because they earn a lot. Rather, it’s because they waste little.
  3. Once we've passed two levels of approval, it's time for... further discussions! The tech meetups start, where we discuss in more detail the resources we need (people, infrastructure, tooling, etc.) and the development approaches, and set up further consultations (with Microsoft reps, for example, who give us a hand whenever we can't handle, say, cloud database issues on our own). We also do a lot of talking to decompose the logic bundle, set up the ETL pipelines and, finally, translate all that talking into development tasks.

Denormalized data

The client's contractors, the application's major users, provide data in various formats that have to be synchronized. For this, the client has a temporary database and a whole separate team in charge of it. We take the data that has just undergone primary processing and use it for our own reporting needs. That data is neither structured nor easy to read, so we only take the portion that might be useful to us, without overloading our reporting tools.

Making our way through the data and business logic with only the codebase able to tell us the truth was quite a journey. (Why not check the documentation? More on that later.)
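
For illustration only, here is a minimal sketch of what "taking the useful portion" of a semi-processed feed can look like. The connection string, table and column names are hypothetical placeholders, since the client's actual schema isn't something we can share.

```python
# A minimal sketch: pull only the columns the reports actually need out of the
# temporary database that holds the semi-processed contractor feed.
# All identifiers and the connection string below are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

staging = create_engine("mssql+pyodbc://user:pass@staging-dsn")  # placeholder connection

query = """
    SELECT policy_id, contractor_id, premium, claim_amount, reporting_period
    FROM stg_contractor_feed           -- wide, denormalized feed
    WHERE reporting_period >= '2023-01-01'
"""
useful_slice = pd.read_sql(query, staging)

# Light cleanup so the reporting tools aren't overloaded with noise.
useful_slice = useful_slice.drop_duplicates(subset=["policy_id", "reporting_period"])
```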

Architecture

The issue our tech lead noticed first and foremost was the system architecture, which had grown too complex over time. The application had been decoupled into several microservices with separate code bases that still shared a single database. This made debugging a nightmare, to say the least.
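
To illustrate why "separate services, one database" hurts, here is a purely hypothetical sketch (not the client's code or schema): two services that look independent quietly disagree about shared data, and the resulting bug belongs to neither codebase.

```python
# A minimal sketch of the shared-database trap. Hypothetical table and services.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, amount REAL, status TEXT)")

def intake_service_add_claim(amount):
    # Service A assumes 'status' is free text and writes its own label.
    db.execute("INSERT INTO claims (amount, status) VALUES (?, ?)", (amount, "NEW"))

def reporting_service_open_claims_total():
    # Service B assumes a different status vocabulary ('OPEN'), so it silently
    # misses everything Service A inserted, and neither codebase looks wrong on its own.
    return db.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM claims WHERE status = 'OPEN'"
    ).fetchone()[0]

intake_service_add_claim(1200.0)
print(reporting_service_open_claims_total())  # 0: the services disagree about shared data
```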

ETL

The ETL ecosystem is a whole different story. They say insurance companies are sitting on piles of data and doing nothing with them. So wrong. (Have a look at my colleague's stories for a better understanding of what is going on with data in the insurance industry.) On this project, ETL gave us quite a hard time due to multiple data sources and inconsistent documentation.
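
To show the multi-source pain in miniature, here is a hedged ETL sketch. The sample frames stand in for two contractor feeds that arrive in different layouts; the real sources, schema and target database are the client's and are not shown here.

```python
# A minimal extract-transform-load sketch with invented sample data standing in
# for two contractor feeds delivered in different layouts.
import pandas as pd

# Extract: in reality these frames arrive as CSV/Excel/API feeds per contractor.
feed_a = pd.DataFrame({"PolicyNo": ["P-1", "P-2"], "PremiumAmt": [1200.0, 980.0], "EffDate": ["2023-01-05", "2023-02-11"]})
feed_b = pd.DataFrame({"policy_number": ["P-9"], "premium_usd": [2100.0], "start_dt": ["2023-03-02"]})

# Transform: map each source's column names onto one reporting schema.
column_maps = {
    "a": {"PolicyNo": "policy_id", "PremiumAmt": "premium", "EffDate": "effective_date"},
    "b": {"policy_number": "policy_id", "premium_usd": "premium", "start_dt": "effective_date"},
}
unified = pd.concat(
    [feed_a.rename(columns=column_maps["a"]), feed_b.rename(columns=column_maps["b"])],
    ignore_index=True,
)
unified["effective_date"] = pd.to_datetime(unified["effective_date"], errors="coerce")
unified = unified.dropna(subset=["policy_id", "premium"])  # drop rows the sources garbled

# Load: in a real pipeline this slice would land in the reporting database, e.g.
# unified.to_sql("premium_facts", reporting_db_engine, if_exists="append", index=False)
print(unified)
```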

Turbulent knowledge transfer and an uneasy start

For this project we already had the major business requirements elicited by the previous vendor. A sort of roadmap existed, too. There were even examples of how it all worked on certain platforms.

However, the documentation was unfit for our needs: it was incomplete and couldn't serve as a guide for us. A good part of it was made up of work requests that together formed a rough roadmap. Even in that form, the project documentation covered only a short span of four months, after which came months of undocumented changes.

Knowledge transfer wasn't all roses, either. After getting the cold shoulder from the client, the previous vendor had only a month to hand the system over to us.

Whenever the code base couldn't answer our questions, we worked closely with the customer's Business Analyst, who had joined the project four months earlier.


Hampered knowledge transfer, insufficient or outright missing documentation, overly complex logic. How did Symfa's engineers still manage to learn the ins and outs of the system in only a month?

Well, we've been here before, and more than once. Almost every enterprise project you take on starts with restoring knowledge about the system. We knock on the door of every developer who was engaged on the project earlier, set up meetups with BAs, and collect the remnants of the documentation, or anything remotely resembling documentation (work requests, roadmaps, etc.). Nothing new for any vendor working on enterprise projects.

Find out how the story unfolds in the next part next week

Wait for the next part to see our engineering team in action. In the next two parts, Ilya Mokin, Head of R&D, and Sebastian Cruz, the project's tech lead and architect, will share with you:

  • How we tackled the sluggish reporting
  • What is being done on the project to ensure uninterrupted data flow
  • Why documentation is important for projects with heavy data flows.

Stay tuned.

Follow us on LinkedIn and X to be the first to know about our blog updates.

Credits

Anatoly Bankovsky

Delivery Manager

Anatoly is a Delivery Manager for the Insurtech business line at Symfa.
