
Sharing Spring Beans between AWS Lambda Functions

Working on a huge variety of customer projects and products has exposed the Cloudsoft team to a wide range of technologies. And of course, favourites have emerged. Our preferred tech stack provides a reliable experience for our developers; when you have to build something fast it is good to reduce the number of unknown variables. However, not everything goes to plan and this can be a welcome opportunity to learn something new.

The Set Up

For example, we use OSGi in many of our projects and, although version upgrades are a pain, the benefits of modularity and reduced downtime are worth it. However, at the end of last year, with the discovery of the Log4j2 JNDI vulnerability that allowed unauthenticated remote code execution, it became apparent that we should investigate alternatives.

When the Log4j vulnerability was discovered, we were in the middle of releasing two of our products after an intensive process of re-jigging our bundles to upgrade the Guava version. We couldn’t (and wouldn’t) release a vulnerable update, so we had to start all over again. We used Apache Karaf as a container for our OSGi bundles, and Apache Karaf was also dependent on Log4j2. So, we had to wait for Apache Karaf to release an update.

Amidst all this, we also ramped up development of a new project to produce an Alpha version, and the thought of dealing with the same issues was …​ uncomfortable.

I have been working at Cloudsoft since 2019, writing books about Spring since 2014, and I’ve always praised the excellent documentation, its stability and the ease of building Spring applications. And so, a decision was made.

The Migration

We decided to migrate the new product to Spring and JDK 17, since JDK 17 had been released three months earlier and was said to be the best thing since sliced bread.

So, we made use of all our knowledge of Spring and transformed the product. We used Spring Boot to take advantage of out-of-the-box configuration and Spring Security to support authentication using JWT tokens. Two weeks later we had a secured REST application that integrated perfectly with the old UI. The structure was simple; we had three modules:

  • Server – the main Spring Boot application
  • Core – the core classes and repository services (the ones managing data transfer between the application and the DynamoDb database)
  • Ui – the front-end application, written in TypeScript with React.

The next step was integrating it with the AWS Lambda Functions we needed to load our database with data.

The Obstacles

And this is where we hit our first obstacle. AWS Lambda Functions written in Java 17 are currently not supported because there is no Java 17 Corretto runtime available. Sure, we could have built our own container image, but because time was of the essence to deliver the Alpha version, we decided to take a step back and go back to Java 11.

We hit the second obstacle when we realized one of our AWS Lambda Functions needed to use the Spring repository classes. If you’ve never written an AWS Lambda Function, here’s the gist of it: the Lambda function handler is the method in your function code that processes events. When your function is invoked, Lambda runs the handler method. When the handler exits or returns a response, it becomes available to handle another event. In Spring, any method in a controller class is a handler if it has @RequestMapping or one of its HTTP-method-specific variants slapped on it; to create a Lambda function handler in Java, you have to implement the RequestHandler interface, depicted in the code snippet below.



public interface RequestHandler<I, O> {

    public O handleRequest(I input, Context context);
}

Another thing that Lambda handlers and Spring handlers have in common is that they do not share state. However, in a Spring application, you can initialize the controller bean and inject some dependencies from the Spring application context that the handler methods can access. There is no option to do so with an AWS handler implementation.

In short: we needed our AWS handler implementation to have access to the repository beans, thus we needed a Spring application context to make sure our beans were created and initialized correctly. We already had a Spring Boot application, let’s use Spring Boot, it should be easy, right?


The Solution

We read a lot of documentation to make this happen. The problem is, nothing seemed to fit our situation. We did not want to build a serverless Java container, we did not want to add in spring-cloud, we just needed a jar that could run on an already available runtime. Everything we found seemed to be overkill for an AWS Lambda function, and all we needed were those repository beans. While trying to make this happen, it dawned on me: all we needed was the beans, so we just had to build a minimal context in which they could be created and initialized properly.
This is where my experience of writing Spring applications before Spring Boot existed came in handy.

I already knew how to create a Spring application context, but creating the context every time the handler function was called did not seem such a great idea. Especially if you are trying to keep your function light and quick.

The official documentation says: "When the runtime loads your handler, it runs static code and the class constructor." It also says that "resources that are created during initialization stay in memory between invocations, and can be reused by the handler thousands of times." So, our minimal Spring application context and our few beans could stay in memory and be used by the handler however many times they were needed.

I had two choices: I could create the context and extract the beans the function needs either in the constructor, or in a static block. In the context of an AWS Lambda function the two are pretty much equivalent, since a single instance of RequestHandler is created. The main difference is that while a constructor can initialize instance fields, a static block can only initialize static variables. The colleague who started the code chose a static block to initialize the handler, so I continued from where he left off.
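The difference between the two options can be illustrated without any Spring or AWS classes. The following plain-Java sketch (all names are made up for illustration) shows why a static block mimics "create once, reuse across invocations" in a Lambda execution environment: it runs exactly once, when the class is loaded, while a constructor runs for every instance.

```java
// Plain-Java sketch: static initializers run once per loaded class,
// constructors run once per instance. A Lambda execution environment
// loads the handler class once, so static initialization is reused
// across invocations. All names here are illustrative.
public class InitDemo {

    static int staticBlockRuns = 0;
    static int constructorCalls = 0;

    static {
        staticBlockRuns++;   // runs exactly once, at class load time
    }

    InitDemo() {
        constructorCalls++;  // runs for every new instance
    }

    public static void main(String[] args) {
        new InitDemo();
        new InitDemo();
        System.out.println(staticBlockRuns);  // prints 1
        System.out.println(constructorCalls); // prints 2
    }
}
```

With a single handler instance both approaches behave the same, which is why the choice came down to instance fields versus static fields.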

The only issue left was to customize the context based on the active profile in the RequestHandler implementation. This class is not part of the Spring application context, so it has no idea what @ActiveProfiles is. After a little digging through the Spring documentation I found the solution: environment variables. An AWS Lambda handler cannot receive parameters, but you can configure environment variables for the function. So here is how the RequestHandler implementation ended up looking:

package io.cloudsoft.taskchecker;

// other import statements omitted

public class TaskCheckerFunction implements RequestHandler<Map<String, String>, Map<String, String>> {

    private static OrganizationRepository organizationRepository;
    private LambdaLogger logger;

    static {
        /* 1 */ var profile = System.getenv("");

        /* 2 */ var ctx = new AnnotationConfigApplicationContext();
        /* 3 */ ctx.getEnvironment().addActiveProfile(profile);
        /* 4 */ ctx.register(AwsStepConfig.class, MockStartStopLambdaConfiguration.class, StartStopLambdaConfiguration.class);
        /* 5 */ ctx.refresh();

        /* 6 */ organizationRepository = ctx.getBean(OrganizationRepository.class);
    }

    @Override
    public Map<String, String> handleRequest(Map<String, String> stringStringMap, Context context) {
        logger = context.getLogger();

        String orgId = stringStringMap.get("organisationID");
        Optional<Organization> optionalOrganization = organizationRepository.findByUid(orgId);
        if (optionalOrganization.isEmpty()) {
            logger.log("Could not find organization " + orgId + " deleting step function");
            return Map.of("error", Boolean.TRUE.toString());
        }
        // ... else do the useful stuff
    }
}


Let’s analyze the contents of the static block:

  1. In this line the profile is read from the environment variable.
  2. In this line an empty Spring annotation context is created. Java configuration classes provide the configuration, so the org.springframework.context.annotation.AnnotationConfigApplicationContext is used.
  3. The profile is added to the current list of active profiles for the Spring application context. Since we have only one profile ctx.getEnvironment().setActiveProfiles(profile) would have worked too.
  4. In this line the configuration classes are registered. The official Javadoc says that the register method is used to "Register one or more component classes to be processed". This means that this method can be used to add any class annotated with @Component or any others in the stereotype annotation family, including @Configuration that is used on Java configuration classes.
  5. To process the classes registered via ctx.register(..), which in this case means picking up the bean configurations and creating and initializing the beans, ctx.refresh() must be called.
  6. In this line, the bean the handler function needs to use is extracted from the Spring application context.

Three configuration classes are provided as arguments to ctx.register(..); each of them is explained below.

  • AwsStepConfig.class contains the AWS configuration properties necessary to create an io.cloudsoft.taskchecker.DynamoDbClient bean. This class is customizable via profile, which means the properties injected into this class are read from the property file for the activated profile. Notice the ${} placeholder in the second @PropertySource declaration.

package io.cloudsoft.taskchecker;

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.annotation.PropertySources;
// other import statements omitted

@Configuration
@PropertySources({
        // first @PropertySource declaration omitted
        @PropertySource(value = "classpath:/appeco-${}.properties",
            ignoreResourceNotFound = true)
})
public class AwsStepConfig {

    private String awsAccessKey;

    private String awsSecretKey;

    private String awsRegion;

    @Bean
    public DynamoDbClient dynamoDbClient() {
        var credsProvider = ...; // credentials provider built from the keys above

        return DynamoDbClient.builder()
                // region, credentials and other builder calls omitted
                .build();
    }
}

  • The MockStartStopLambdaConfiguration is a configuration class with repository mock beans that are used in a test context. It is annotated with @Profile(value = "mock"), so when a different profile is activated, this class and the bean definitions contained within its body are ignored by Spring.

package io.cloudsoft.taskchecker;
// import statements omitted

@Configuration
@Profile(value = "mock")
public class MockStartStopLambdaConfiguration {

    @Bean
    OrganizationRepository organizationRepository() {
        return new MockOrganizationRepository();
    }
}


  • The StartStopLambdaConfiguration is a configuration class set up for targeted component scanning. This means it is annotated with @ComponentScan, configured with the package names where bean definitions are expected to be found. This is useful to limit where Spring looks for bean declarations, resulting in faster application startup and a smaller application context. The class body is empty, but bean definitions specific to this handler’s implementation can be added if necessary.

package io.cloudsoft.taskchecker;
// import statements omitted

@Configuration
@Profile(value = {"prod","local"})
@ComponentScan(basePackages = {"io.cloudsoft.repo", "io.cloudsoft.service"})
public class StartStopLambdaConfiguration {
}

Could all these annotations be placed on a single class? Yes, but keeping scope-specific configurations decoupled makes for a more concise configuration and helps developers understand the purpose of each configuration just by reading the class name. It also provides an answer to the question: where are these beans coming from?
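The profile-driven switch that @Profile performs for the two configuration classes above can be sketched in plain Java (all class and method names below are illustrative, not from our codebase): depending on a profile value, a different repository implementation is wired in.

```java
import java.util.Map;
import java.util.Optional;

// Plain-Java sketch of the mock-vs-real switch that @Profile gives us.
// All names are illustrative; the real application lets Spring pick the
// bean based on the active profile instead of this manual factory.
interface Repository {
    Optional<String> findByUid(String uid);
}

class MockRepository implements Repository {
    // fixed in-memory data, standing in for the "mock" profile beans
    private final Map<String, String> data = Map.of("org-1", "Mock Org");

    public Optional<String> findByUid(String uid) {
        return Optional.ofNullable(data.get(uid));
    }
}

class RealRepository implements Repository {
    // would talk to DynamoDB in the real application; stubbed here
    public Optional<String> findByUid(String uid) {
        return Optional.empty();
    }
}

public class ProfileSwitchDemo {

    // selects an implementation based on the active profile, mimicking
    // @Profile("mock") versus @Profile({"prod","local"})
    static Repository repositoryFor(String profile) {
        return "mock".equals(profile) ? new MockRepository() : new RealRepository();
    }

    public static void main(String[] args) {
        Repository repo = repositoryFor("mock");
        System.out.println(repo.findByUid("org-1").orElse("not found")); // prints Mock Org
    }
}
```

With @Profile, this selection happens declaratively: activating a profile before ctx.refresh() decides which configuration class contributes the OrganizationRepository bean.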

Lessons Learned

Spring Boot is great for building working applications fast, but the speed comes at the cost of a bloated application with a lot of dependencies you probably do not need. This is not a problem for a backend that does the heavy lifting, but when you want to write a small, compact application such as an AWS Lambda function, Spring Boot is overkill. This is probably the main reason you won’t find examples of Spring Boot AWS Lambda functions out there on the internet: not because it cannot be done, but because it makes little sense to do it.

In our case, we just needed to reuse some existing code, to get access to the functionality that was part of our Spring Boot application. The good design of our project allowed us to import only the Spring components we needed and use them outside a Spring Boot application.

I am sure we are not the only ones that needed something like this and I hope you will find this useful.

In conclusion, Spring Boot is great, but small Spring applications with minimal contexts are greater. Fast and compact AWS Lambda Functions can be written with Spring; you just need a good understanding of Spring.


About Iuliana

Iuliana Cosmina is a Senior Software Engineer at Cloudsoft. She has also authored several books on Java and Spring for Apress, including Java for Absolute Beginners, Pro Spring 5, Pro Spring MVC with WebFlux and has a forthcoming publication on Java 17 for Absolute Beginners (2nd Edition).

