Revitalizing Legacy Applications: A Cloud Migration Approach
by Mingfu Qin, Lei Yang
Between March 2018 and November 2021, I worked with a product team migrating about a dozen applications to Azure. By the time I left the team, all but one had gone to the cloud. We used multiple migration strategies: total rewrite, lift-tinker-and-shift, containerize as-is, retire, and, in the last case, leave the service on-premises for the time being.
I started as a tech lead on the first migration, which was a total rewrite, and our first application was deployed into an Azure Kubernetes cluster. Later on, while other managers and teams continued working on new features in the first application and rewriting another user-facing application, I worked as an individual contributor on the second major migration project. The application had about 175 APIs (210 with different versions) and eight clients. It was a mix of JEE and Spring Framework. I spent four months researching and coding to make it deployable as both a WebLogic app and a container app in Kubernetes. Along the way, I documented every step I took for the project.
I used the same methodology to migrate two other legacy API applications. This time, instead of writing the code myself, I brought other engineers into the game. One application was a fifteen-year-old JEE API application with many internal and external partners. I acted as a manager, leading a senior engineer and a very strong junior engineer to architect, develop, and finalize the migration. It took about 15 months to completely move all vendors to the Azure instance and shut down the old WebLogic instance. This experience proved the process scales to multiple engineers. The other project was a backend application that had only queue listeners and Quartz jobs, all 72 of them. I started a quick POC to separate the single application into smaller services and let four engineers work on them separately. Each listener was moved into its own container, and each Quartz job became a Kubernetes CronJob.
During the same period, I also containerized a few other smaller applications. Although these components were deployed to WebLogic or JBoss, they had been developed locally as Spring Boot applications, so putting them into a container and starting them as plain old Java applications was all that was needed. I also migrated a couple of services to Kubernetes simply by finding proper images on Docker Hub. I managed most of the pipelines for no-downtime weekday deployments, and I handled infrastructure changes like F5 VIPs and DNS updates with support from a network team.
The evolution of both Spring and the infrastructure simplified migration tremendously. Ten years ago, it would have been unimaginable to migrate so many applications to Azure or other Kubernetes clouds in such a short time. I’m here to review and share this journey. Hopefully, it will give other developers an idea of how early decisions impacted the results.
I will first cover several important decisions we made for the user-facing application rewrites. Like everything else in engineering, decisions are made based on tradeoffs. User-facing applications have the highest chance of gaining organizational support for a rewrite: a newer, easier user experience is always welcome; requirements change with the market all the time; a quick idea-to-market cycle is demanded. Along with the new user interface, the latest technologies can be used, and applications can be written from scratch, leaving the most complicated business logic behind APIs from legacy backend services. Business-facing applications, on the other hand, are normally better suited for an upgrade than a rewrite: the required training and workflow changes risk eroding customer loyalty in many cases.
Then I will focus on migrating legacy backend services. Such services were usually developed as SOA applications. They have rich business logic and are core to the business. When a change is needed, the application name is often followed by the name of the one person who knows how to make the change. Management can’t estimate the risk of upgrading to cutting-edge technologies, and engineers are afraid of being stuck with legacy keywords that nobody wants to hear. However, since these applications are so critical to the business, nobody can get away from them while they become harder and harder to maintain. Finding a way to revitalize these services, so that new features can be added with the latest frameworks, brings light to the hearts of the developers and also simplifies development for the clients of these applications.
The Rewrite
For the first rewrite project, there were a few major decisions that benefited the effort in the long run.
The first and most important was to move to the cloud first and make it cloud-native later. This decision reduced the complexity of the project drastically. At the time, everyone working on the project was new to both cloud and microservices; we had been in monolithic mode, and we had only three months to finish the project. Redesigning data models may be straightforward enough, but migrating existing data to the new model is another story. Moving to Kubernetes without changing fundamental data models gave only partial benefits of the microservices architecture, but it laid a foundation for the team to quickly get familiar with the new infrastructure and to start seeing the benefits of blue/green deployment and easy auto-scaling.
The second and biggest decision was to make the services simple Java applications, namely Spring Boot applications. This made development much easier and let the pipeline evolve in the right direction. We got rid of traditional application containers and packaged the applications in Docker containers. Developers used to monolithic applications are often afraid of throwing away Tomcat, JBoss, or WebLogic, but these application servers now overlap so much with Kubernetes that keeping them can make things a lot more complicated.
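To make “simple Java application” concrete, here is a minimal sketch of the shape we aimed for: a Spring Boot entry point packaged as an executable jar and started with plain java -jar inside the container. The class and package names are illustrative, not from the actual project.

```java
package com.example.orders;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class OrdersApplication {

    public static void main(String[] args) {
        // The embedded servlet container starts here; no WebLogic or JBoss required.
        SpringApplication.run(OrdersApplication.class, args);
    }
}
```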
The newly defined user interface provided a clean mapping from the desired APIs to the actual backend services and data. Clearly defined API requirements and the microservices design made parallel development possible. We were able to onboard ten consultants in both the US and India to expand the team and finished 56 APIs in six weeks.
Management provided two trainings for developers. One was a three-day workshop on Microsoft Azure and Kubernetes; looking back, that workshop established the foundation for all lead engineers to move toward the cloud solution. The other was a ten-day Jenkins pipeline design walk-through: my team met with a consultant for one hour a day, and I spent the rest of each day constructing a new Jenkins pipeline. The outcome was a solid foundation for the CI/CD pipelines still in use today.
We also deployed the same Docker images to both on-prem VMs and the Azure cluster, since we weren’t sure the Azure cluster would be production-ready within three months. The DNS name was switched to the cluster instance almost at the last minute, concluding a successful rewrite story.
However, rewrite projects can have a lengthy development cycle if not managed closely. The second rewrite project lasted about two years before it was finally 100% in production. When new user experiences were designed, feature priorities had to be redefined; when requirements change, the development cycle changes. In the meantime, the corresponding legacy application was containerized and migrated to Azure with about two weeks of effort, to absorb the waves of user demand during the pandemic.
The Lift-Tinker-and-Shift
The applications I took through this approach had many common traits of a legacy application. The original requirements were outdated. The original developers were no longer around. New developers had no skills in the old technologies and not much interest in acquiring them either. A small feature enhancement would take days or weeks; developers had to go down the old way of coding, digging through layers and layers of code to make a simple business logic change. Meanwhile, these applications were mission-critical, and a rewrite would require hefty integration testing with clients.
On the upside, these applications were all backend, consisting of RESTful APIs, jobs, or message listeners.
These traits lend themselves well to Kubernetes: we can respond to volume changes easily, either with auto-scaling or with manual adjustment. Scaling up or down in a Kubernetes cluster takes only a few minutes, while setting up a whole new virtual machine and configuring it as a new managed server in a WebLogic cluster takes hours if not days. And once a new server is set up, it is rarely scaled down when the volume dissipates.
The rest of the article focuses on a migration that took this approach, sometimes down to the details of individual code changes. The main process was: I defined a migration path. I collected testing data for the APIs. I cleaned up unused APIs and POM dependencies. I picked the easiest change first to gain entry to the current codebase. I started with a minor-version upgrade of Spring Boot. I converted Spring’s XML configurations to Java annotations. Then I performed a major upgrade of Spring Boot and Spring Framework. I then got rid of EJB. Once EJB was gone, I removed all dependencies on WebLogic resources, like data sources and work managers. I added the pipeline that deploys the app into Kubernetes. I used the collected integration tests to verify the changes and released the application at major milestones so it could go through QA regression. I requested load testing specifically for the migration as well. Once all the clients were migrated, we gracefully retired the WebLogic instances.
Case Study
The project discussed here is the application mentioned above, which supported about 175 APIs (210 counting different versions) and eight clients.
Define a Migration Path
I had a vague idea of what needed to be done to containerize this application, but I knew it could take quite some time. Meanwhile, the main development team would continue making and releasing minor changes. In other words, I needed to migrate the application while keeping it maintainable for ongoing development.
I knew three typical migration paths. The first one is to migrate only the APIs still used by the clients, moving the APIs one by one and asking the clients to use the new endpoints. This approach is preferred when the services have fewer clients, and the scope of both services and clients is clear. It leaves a very clean API set, as it naturally retires the unused features, and it has the highest chance to generate reusable services. This approach can also lead to a faster development time, as it is easy for developers to utilize the latest technologies, e.g., microservice architecture, the latest JDK, and the Spring Framework.
The main drawback is that it is only suitable for those well-documented projects with a limited number of clients, or with a steady code base that no longer changes much. This approach also requires robust integration tests with support from clients. It is easy to start on this approach but it is hard to finish. Development teams often end up maintaining both new and old code bases for a long period of time.
The second option is to upgrade the application as a whole, basically bringing it to the point of being Kubernetes-ready with a minimal amount of code change. Once the Azure instance is well tested, we update the load balancer to point to the Azure instances. All the clients are migrated at the same time, and none of them need any change on their side. This approach is well-suited for services that have many clients and mobile applications. It also provides a clean rollback plan, as we can revert the DNS/load balancer to the existing instance. This option creates less interruption to the business and still allows new features to use new technology.
The drawback of this method is its complexity, which limits feature development during the migration. That was not a major obstacle in my situation, since with our new frontend every new feature was developed in the new microservices.
I chose the second option for my first migration.
Collect Test Cases
Here, a proper log aggregator like Splunk is important. If the application logs all requests and responses, generating a large number of test cases is much easier. If the application does not have such logs, I would suggest adding logging to the application for a day or two in production to gather real data, for example with a servlet filter like the sketch below.
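A hedged sketch of such temporary traffic logging, assuming a Spring-managed servlet filter; the class name and log format are illustrative, not from the actual project.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.util.ContentCachingRequestWrapper;
import org.springframework.web.util.ContentCachingResponseWrapper;

@Component
public class ApiTrafficLoggingFilter extends OncePerRequestFilter {

    private static final Logger log = LoggerFactory.getLogger(ApiTrafficLoggingFilter.class);

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        ContentCachingRequestWrapper req = new ContentCachingRequestWrapper(request);
        ContentCachingResponseWrapper res = new ContentCachingResponseWrapper(response);
        try {
            chain.doFilter(req, res);
        } finally {
            // One line per call; Splunk (or any aggregator) can later turn these into test cases.
            log.info("api={} {} status={} request={} response={}",
                    req.getMethod(), req.getRequestURI(), res.getStatus(),
                    new String(req.getContentAsByteArray(), StandardCharsets.UTF_8),
                    new String(res.getContentAsByteArray(), StandardCharsets.UTF_8));
            res.copyBodyToResponse(); // write the cached body back to the real response
        }
    }
}
```

Remove the filter (or guard it with a property) once enough data has been collected, since logging full payloads in production has obvious volume and privacy costs.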
The first step I took on the legacy application was to remove the APIs that had not been used in the past 30 days, based on metrics gathered by Splunk. This reduced the migration effort and increased code cleanliness. A tech lead still had to review all the APIs slated for removal, since some of them were used only for monthly or yearly reports; those were easy to spot. For every API to be migrated, at least three integration tests were created, and I verified that all of them passed against the existing code base in the Dev environment.
Choose the Code Change Merge Path
I started by copying all the code from the old application GitLab repository to a new one. One day, my manager raised a concern about code merging… “How are we going to merge the ongoing changes from the existing application to the new one? Are we going to copy files every time and manually merge them?”
As it turned out, there were two solutions. One was to create a branch in the old code repository, which preserves the history of changes. The other was to fork a new repository and disconnect the two later; Git remembers the merges and changes for you, so you don’t need to compare and merge manually every time, and you can even add a remote to the fork after creating the project. That’s powerful.
I picked the branching method so other engineers could easily spot my changes.
Create a New Target-State Application
I followed a blog to create a Spring Boot Jersey application and copied the full stack of the simplest API into it. Then I ran the new application to make sure it passed the integration tests for this API. The purpose was to understand this particular application: how the new code structure could reuse the existing code, and how much effort the change would take.
As a side note, the original engineers repeated try { getConnection() } catch () {} finally { closeResources(stmt, rs) } more than 200 times. It takes a lot of discipline to catch everything and avoid resource leaks. With this upgrade, we could start using Spring Data; at least the new code won’t need to worry about the leaks.
In the new application, I included classes like the DTOs, DAOs, EJBs, IMPLs, and the controller. It turned out this API depended on a job that used an EJB, which led to removing the EJB by making the EJB remote implementation class a POJO with the @Component annotation. The DAO classes used @Resource to obtain the datasource, which was replaced with @Autowired. I left the old JDBC connections, PreparedStatements, and other JDBC code in their original state. There was quite a lot of code constructing parameters and extracting output, and it was mingled with business logic; I chose to leave it as-is to reduce risk.
The new application revealed the target state of the migration, and from there the parts that needed to be updated were almost all defined. The rest was to implement these changes across all APIs.
Convert to a Spring Boot 1.5 Application in WebLogic
The application was on JDK 8, Spring 4.3.4, and Jersey 2.25 provided by WebLogic 12.2.1. Spring Boot does have a spring-boot-starter-jersey, and it pulls in a newer version of Jersey. The first try was to add the Spring Boot parent POM to the application’s pom.xml. I took a small step by adding the parent POM of Spring Boot 1.4.2, which uses Spring 4.3.4. Surprisingly, it worked. Then I updated the Spring Boot version to 1.5.11, and it worked again. It looks like minor version upgrades of Spring Boot aren’t that scary after all. At this point, even though pom.xml included the Spring Boot POM, the application still referenced its Spring dependencies directly.
Reduce Dependencies
Legacy applications often have dependencies added to the project without much governance. Cleaning up the dependency tree saves a lot of time during upgrades, and with the integration tests in place, it is safe to do this cleanup.
I added spring-boot-starter-jersey with the version of Jersey provided by WebLogic and set the scope of the Tomcat dependency to ‘provided’ in the WebLogic profile. The change triggered a heavy dose of Maven dependency tree tracing, since mixing Jersey Server 2.x and Jersey Client 1.19 turned the dependencies into a big maze.
The upgrade to <jersey.version>2.27</jersey.version> brought my first major blocker from WebLogic. WebLogic 12.2.1 ships Jersey 2.21.1 as its JAX-RS 2.0 implementation, so I had to disable WebLogic’s Jersey. After a whole day of tracing the Maven dependency tree, the final solution came from “Using Jersey 2.x web service on Weblogic 12.1.1”: a big list of packages had to be disabled in WebLogic. I had also tried <prefer-web-inf-classes>true</prefer-web-inf-classes> but gave up once I saw all sorts of MethodNotFound exceptions, which normally indicate something wrong with class loading.
While fighting with the dependencies, I also tried to upgrade the Jersey client to 2.27 using all kinds of Eclipse regular-expression search and replace. After three days of fighting with it, I couldn’t even figure out a simple way to post an empty body. I finally left that branch alone and decided to fight the dependencies instead. Decision made: do NOT upgrade Jersey Client 1.19 to 2.27. The code changes were extremely intrusive, and the usability of Jersey Client 2.x was even worse than 1.x. I can’t blame the people who gave up on this over-engineered specification; I prefer RestTemplate for sure.
The conflict between application-server-provided jars and Spring-provided jars was a big pain in all three applications I migrated, each with different failures and errors. Knowledge of class loaders helped here. I am not sure how many young engineers will ever need this knowledge, especially application developers: Spring isolates them so well that they rarely need in-depth knowledge of the Java virtual machine and its class loaders. In the end, I still did not completely understand how the WebLogic class loaders worked, but the deployment of the wars started working, and so did the liveness servlet.
The First Milestone - Release to QA
With all integration tests returning the same responses as before the refactoring, I released the build for QA regression. Since there was no business logic change, the regression passed without defects, and the build was deployed to production. Git also merged my refactoring and the minor code changes from other developers without problems. This release confirmed the migration path from a WebLogic application to a Spring Boot application.
Prepare Profiles for the Separation of WebLogic and Spring Boot Applications
Since my goal for the first project was to start the same application as a Spring Boot application and a WebLogic application, I created two profiles in pom.xml, Spring Boot and WebLogic, for two sets of configurations.
The major difference was that weblogic.jar was included only in the WebLogic profile, while the embedded Tomcat and ActiveMQ broker were enabled only in the Spring Boot profile. The datasources, JMS connection factories, and queues were obtained via JNDI in the WebLogic profile; in the Spring Boot profile, these resources were created through Spring Boot starter configurations.
Upgrade to Spring Boot 2
Upgrading Spring Boot to 2.x forced the upgrade to Spring 5. The indicative error was a “KotlinDetector.isKotlinReflectPresent() does not exist” message, so I put <spring.version>5.1.10.RELEASE</spring.version> into pom.xml. javax.servlet-api was upgraded to 4.0.1 due to an issue described in “@WebListener Annotation Example”.
The upgrade worked after only a few errors whose solutions could be found on Stack Overflow. I can only say Spring did a good job of providing smooth upgrades.
Convert to Annotation-based Configuration
Java annotation-based configuration is more developer-friendly than XML configurations. WebLogic 12.2.1 supports Servlet 3.1 which makes web.xml optional. The process is well documented in multiple blogs, e.g., “How-to” Guides and Spring app migration: from XML to Java-based config.
The project had a web.xml, an ApplicationContext.xml, and a services-context.xml in a shared folder. The web.xml had a few pieces: a servlet for Spring’s dispatcher with its context loader listener, a few WebListeners, a few servlets for Jersey endpoints, a Login filter for security, a PropertyPlaceholderConfigurer to load properties from the database, a bean to use AOP pointcuts to log performance of method calls, a task scheduler, and a list of resource-ref’s for WebLogic work managers.
The Spring dispatcher servlet and its ContextLoaderListener were replaced with SpringBootServletInitializer, since it registers this listener by default. See more at “Spring boot convert web.xml listener”.
@WebListener is a Servlet 3.x annotation, which is not managed by Spring. It can be easily converted to Spring’s @EventListener which listens to ContextRefreshedEvent.
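A minimal sketch of that conversion, with an illustrative listener name (the original class names are not shown here):

```java
// Before: a container-managed Servlet 3.x listener.
//
//   @WebListener
//   public class CacheWarmupListener implements ServletContextListener {
//       @Override
//       public void contextInitialized(ServletContextEvent sce) { warmUp(); }
//   }

// After: a Spring-managed bean reacting to the refreshed application context.
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class CacheWarmupListener {

    @EventListener
    public void onContextRefreshed(ContextRefreshedEvent event) {
        warmUp(); // same startup logic, now with access to injected beans
    }

    private void warmUp() {
        // ...
    }
}
```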
The LoginFilter, previously annotated with @WebFilter, was marked with @Component and extended OncePerRequestFilter in favor of the regular Spring Security filter approach. A new WebSecurityConfig.java then plugged this filter into the chain to provide authentication and authorization.
The Jersey endpoints were converted to a ResourceConfig, and the Jersey resources were registered as explicit classes instead of being scanned by package. This was due to a defect in Glassfish Jersey; see “Spring Boot and Jersey produces ClassNotFound”.
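A sketch of the explicit registration, with an illustrative stand-in resource class:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.glassfish.jersey.server.ResourceConfig;
import org.springframework.stereotype.Component;

@Path("/ping")
class PingResource {                      // stand-in for the real resource classes
    @GET
    public String ping() {
        return "pong";
    }
}

@Component
public class JerseyConfig extends ResourceConfig {
    public JerseyConfig() {
        // Register each resource class explicitly; packages("...") scanning
        // hit the ClassNotFound defect inside the Spring Boot fat jar.
        register(PingResource.class);
        // register(OrderResource.class); ... one line per real resource
    }
}
```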
A CommonConfig.java was added in a shared library submodule to replace services-context.xml, and @Import(CommonConfig.class) was added to the war/ear applications that depend on this shared jar. The two beans in services-context.xml, a task scheduler and a PropertyPlaceholderConfigurer, were converted to beans created in CommonConfig.java.
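A sketch of what such a CommonConfig might look like. The pool size is illustrative, and PropertySourcesPlaceholderConfigurer is shown as the modern counterpart of the legacy PropertyPlaceholderConfigurer; the actual project loaded properties from the database, which would plug in as a custom PropertySource.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
public class CommonConfig {

    @Bean
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(10);                // illustrative size
        scheduler.setThreadNamePrefix("shared-jobs-");
        return scheduler;
    }

    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyConfigurer() {
        // Shown with defaults; the real one would be backed by the database-driven properties.
        return new PropertySourcesPlaceholderConfigurer();
    }
}
```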
I created one single SpringBootApplication main class for the application and converted all wars and the ear into a single jar. This was purely due to my lack of experience. In all later projects, I would convert each WAR into its own Spring Boot application and deploy them as separate containers. The wars in an EAR may share common libraries, but it is better to honor the already separated, smaller wars instead of creating a big monolithic Spring Boot application from the ear.
The task executor in this application was used for scheduling asynchronous jobs. This particular task executor looked up the JNDI entries of the WebLogic work managers, which served as pool managers for these jobs. The WebLogic work managers use the CommonJ APIs, and the JNDI lookup for them simply does not work without the XML configuration and the resource-ref’s in web.xml. WebLogic 12.2.1 does support Java configuration for many things like datasources and servlets, but not for work managers. I couldn’t figure out how to replace the XML before a QA regression started, so with everything else converted to Java-based annotations, the work managers were left in web.xml for the second milestone.
At the second milestone, everything but the work managers in web.xml had been converted; applicationContext.xml and services-context.xml were gone.
The Second Milestone - Release to QA
Again, with all integration tests returning the same responses as before the refactoring and no business logic changes, the regression tests passed without defects, and the build was deployed to production. The release confirmed the migration path from the WebLogic application to Spring Boot 2.x with Java annotation-based configuration.
Replace EJB Transaction Management with Spring
The application had chosen EJB for its transaction management at the time, but now Spring gives that almost for free; see @EnableTransactionManagement in Spring Boot.
The project, like many other legacy projects, had a base class, BaseDAO, that provided utility methods to open and close connections and resources like PreparedStatement or ResultSet. The implementations of all the EJB remote interfaces used WebLogic’s @CallByReference and used @Resource to look up the JNDI entries of the data sources.
To start the conversion, two datasource configuration classes were created: one created datasource beans from properties, and the other looked up the WebLogic datasources by JNDI. They were used by the Spring Boot and WebLogic profiles respectively.
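Sketches of the two configurations follow. Bean names, property keys, and JNDI names are illustrative, and Spring’s @Profile is used here for selection, although the actual project switched between Maven profiles in pom.xml.

```java
import javax.sql.DataSource;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.datasource.lookup.JndiDataSourceLookup;

@Configuration
@Profile("springboot")
class PropertiesDataSourceConfig {

    @Bean(name = "ordersDataSource")
    @ConfigurationProperties("app.datasource.orders")   // url, username, password, ...
    public DataSource ordersDataSource() {
        return DataSourceBuilder.create().build();
    }
}

@Configuration
@Profile("weblogic")
class JndiDataSourceConfig {

    @Bean(name = "ordersDataSource")
    public DataSource ordersDataSource() {
        // Reuses the datasource already defined on the WebLogic server.
        return new JndiDataSourceLookup().getDataSource("jdbc/OrdersDS");
    }
}
```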
All the DAO classes were annotated with @Repository, and since the original constructor took a parameter of datasource, the parameter was annotated with @Autowired and @Qualifier (there were two data sources defined in the WebLogic profile).
The DAOs in the original code were created using new; to avoid potential multithreading issues, they were also annotated with prototype scope. In the implementation classes, the EJB 3 annotations @Stateless, @CallByReference, and @Resource were replaced by @Service, and @TransactionAttribute was replaced with @Transactional. Everywhere an @EJB remote interface was referenced, the annotation was replaced with @Autowired, as in the sketch below.
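Roughly, the conversion of an implementation class looked like this; the class and interface names are illustrative.

```java
// Before (EJB 3 on WebLogic), roughly:
//
//   @Stateless
//   @CallByReference
//   @TransactionAttribute(TransactionAttributeType.REQUIRED)
//   public class OrderServiceBean implements OrderServiceRemote { ... }

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

interface OrderServiceRemote {            // the remote interface is kept intact
    void placeOrder(String orderId);
}

@Service
@Transactional                            // replaces @TransactionAttribute
public class OrderServiceBean implements OrderServiceRemote {

    @Override
    public void placeOrder(String orderId) {
        // unchanged business logic
    }
}

// Callers change from
//   @EJB private OrderServiceRemote orderService;
// to
//   @Autowired private OrderServiceRemote orderService;
```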
I met a fun trade-off in this conversion, and it likely applies to all such conversions. The replacement itself was straightforward: simply copy and paste, or use some regular expressions in the IDE. The only problem was the number of repetitions; the project had nearly 200 APIs with an even larger number of DAOs. Immediately, I thought of a tool to do this repetitive work. However, after a few hours of Googling, I only found a few over-engineered libraries for my use case. I tried to write a tool using shell and Perl scripts, but a few hours later it was still far from perfect. Then I thought of a suggestion from a young engineer on my team: maybe copy and paste isn’t a bad thing, even a few hundred times. I decided to settle down and complete the boring, repetitive work. After four hours, the code was compiling successfully again. The other projects I migrated had fewer APIs to work on, yet I still wish for a tool to do all the work automatically. It still may not be worth the time, though; such a tool only makes sense when an industry-scale wave of migrations starts, and while, yes, we could start one, I was, after all, on my first migration project.
The power of the Java interface demonstrated itself. All remote Interfaces were kept intact, and Spring auto-wiring worked very well.
At this point, the old WebLogic application could be started as a Spring Boot application in IDE. Now I started getting into runtime issues.
The first and most significant one was that the ‘new’ operator was used explicitly everywhere. Objects created with ‘new’ are not managed by Spring, so the Spring bean life cycle does not apply to them; in particular, @Autowired values are not injected into these ‘new’-ed objects. A hack was to add an ApplicationContextAware implementor, SpringBeanGetter, so that we could get the application context anywhere and retrieve the needed beans in place of these new’ed objects. This was a compromise and a bad design pattern, but registering all of these classes with the ApplicationContext would have been too intrusive, and I’d rather minimize code change for the migration.
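A sketch of what such a SpringBeanGetter looks like; this follows the common ApplicationContextAware pattern, and the method name is illustrative.

```java
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

@Component
public class SpringBeanGetter implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        context = applicationContext;   // captured once at startup
    }

    // Lets 'new'-ed legacy objects fetch Spring-managed beans without being beans themselves.
    public static <T> T getBean(Class<T> type) {
        return context.getBean(type);
    }
}
```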
Finding ‘new’ in the code and replacing it was another piece of repetitive work, but with the integration tests guarding against failures, it didn’t take long to cover them all.
Now I went back to the WebLogic work managers. At first, I was hoping to find an implementation of CommonJ that did not depend on WebLogic. Soon I realized that @Async did the job, and it was even easier with the SpringBeanGetter to obtain the task scheduler. Things worked out pretty well.
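A sketch of the @Async replacement, assuming the taskScheduler bean from the CommonConfig sketch above; the class and method names are illustrative.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync
class AsyncConfig {
}

@Service
class ReportGenerator {

    // Previously dispatched through a WebLogic work manager looked up via JNDI;
    // now Spring runs it on the named executor bean.
    @Async("taskScheduler")
    public void generateNightlyReport() {
        // ... long-running work ...
    }
}
```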
MDBs were a straightforward conversion: enable @EnableJms and use @JmsListener.
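A sketch of that replacement; the destination and class names are illustrative.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Configuration
@EnableJms
class JmsConfig {
}

@Component
class OrderEventListener {

    // Replaces the MDB's onMessage(); the connection factory comes from the
    // WebLogic JNDI setup or the embedded ActiveMQ described below.
    @JmsListener(destination = "ORDER.EVENTS")
    public void onOrderEvent(String payload) {
        // ... the same processing the MDB performed ...
    }
}
```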
The project used two WebLogic queues to implement @Async work, and there were no persistence requirements, so I enabled embedded ActiveMQ in Spring Boot for the purpose instead of requesting new Service Bus queues in Azure or traditional IBM MQ queues.
Convert Quartz jobs to Kubernetes CronJobs
Quartz jobs have been around for a long time, but they are heavyweight because of the required database configuration. Spring Boot’s CommandLineRunner along with Kubernetes CronJobs provides an easy alternative, and if a job is thread-safe, it acquires auto-scaling capability for free from Kubernetes.
There were a couple of points to pay attention to in this conversion. Kubernetes CronJobs can only be scheduled to the minute, and there is a clock-precision caveat: the status checks for a job happen, by default, every 15 seconds. For jobs that require higher accuracy, you may still want to stay with Quartz. Also, Spring Boot’s CommandLineRunner does not always wait for @Async tasks to finish, so the application code has to be changed to wait for these tasks, or you can simply remove @Async if possible. A sketch of a converted job follows.
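A minimal sketch of a Quartz job recast as a CommandLineRunner that a Kubernetes CronJob can run on its schedule; the class name and cleanup logic are illustrative.

```java
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class NightlyCleanupJob implements CommandLineRunner {

    public static void main(String[] args) {
        // Exit when run() returns so the CronJob pod completes instead of hanging around.
        System.exit(SpringApplication.exit(SpringApplication.run(NightlyCleanupJob.class, args)));
    }

    @Override
    public void run(String... args) throws Exception {
        // The former Quartz execute() logic goes here. If the work fans out with @Async,
        // block here until those tasks complete (or drop @Async), otherwise the JVM
        // may exit before they finish.
        doCleanup();
    }

    private void doCleanup() {
        // ...
    }
}
```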
Deploy to Kubernetes
At this point, almost all the WebLogic dependencies had been removed. However, I noticed a strange dependency on weblogic.jar while testing. After digging into it, the storyline appeared to be that an engineer had used a ParseException from a WebLogic package (bea.xxx), most likely because it was the first option in the IDE’s context help; it should have been java.text.ParseException. The offending method was a utility imported into other utility classes, and “throws ParseException” was then added to the calling methods. Later on, other engineers got confused and started catching both exceptions and adding throws to even more methods. The wrong ParseException polluted nearly ten other classes and forced a dependency on weblogic.jar onto all modules. We could blame the first engineer for being careless, but it is also a ramification of Java’s checked exception handling; let’s move to Spring’s unchecked exceptions. After removing the wrong ParseException references, the Spring Boot application started without weblogic.jar.
Deploying to Kubernetes was straightforward, as we already had about 20 other microservices in Azure. The pipeline simply ran Maven to build a runnable jar, created a Docker image from the openjdk8 base image with that jar, and pushed the final image to Azure Container Registry (ACR). A Helm chart was deployed to the cluster, Kubernetes pulled the image, and yes, a new pod was up and running.
It was also necessary to add Spring Actuator to the application so the liveness checks of the pods would work more nicely. With Spring Boot in place, this was just a matter of adding spring-boot-starter-actuator.
Logging code and configurations were also cleaned up. In Kubernetes, it is recommended to stream logs to stdout or stderr so log aggregators like Splunk can gather them easily, while on on-prem VMs the logs were normally stored as local files on a rolling schedule. The different behaviors could be configured in the Docker setup, so this wasn’t a blocker.
CA certificates (cacerts) and other security agents were also baked into the base Docker images so that all microservices stayed in sync.
Thread limits were another subject to research, and they depend on how the pod is used. You can set the limit based on the pod’s resource limits and your auto-scaling preference. For REST API pods, a horizontal pod autoscaler is preferred: the thread limits can stay small, and we lean on the autoscaler to provide more capacity.
The Third Milestone - Run as Spring Boot and WebLogic Application
At this point, the pipeline was deploying the same code base to both the WebLogic server and the Kubernetes cluster on Azure. QA regression ran on the WebLogic instance, while the integration tests ran against both the Azure instance and the WebLogic instance. This way, the regression tests verified that the business logic was sound and working as expected.
The achievement of this release was migration from WebLogic to Kubernetes.
Migrate Clients
In the following releases, the internal clients were asked to change their configuration to use the Kubernetes endpoints; the ones already deployed in the same cluster especially benefited, enjoying the same level of security and low latency.
For external clients, a new VIP was set up on the F5 in front of Kubernetes, and the DNS of the existing endpoint was updated to point to the new VIP. The clients were switched to the Kubernetes instance transparently.
Summary
WebLogic and other JEE containers provide a one-stop solution for applications that require a host of resources: a set of connections to databases, message queues, and work managers; a transaction boundary and deployment model in the form of EJB; and a set of load balancers for high availability.
However, the Spring Framework and Spring Boot can nowadays largely replace what WebLogic offers, and they come with much better usability. Spring Boot starters, like spring-boot-starter-data-jpa and spring-boot-starter-jersey, along with built-in thread pool management, can be enabled with only a few annotations. Meanwhile, Kubernetes takes care of infrastructure concerns like clustering, load balancing, scaling, and high availability, which sit outside the business logic. The combination of Spring Boot and Kubernetes simplifies the modernization of legacy yet mission-critical applications and provides an affordable path for their revitalization.