
Java Guest Book Example [Beginner Spring Boot] – Saving To Postgres

In this post, we’re continuing with our Spring Boot guestbook example, but this time we’ll be adding persistence by integrating a PostgreSQL database. While the overall process is fairly straightforward and well-documented, there were a few things that caught me off guard along the way.

One challenge I faced was integrating Spring Data repositories with an already decoupled domain layer. If you’re like me and have structured your domain logic to be independent from any specific frameworks, you might wonder how to connect the dots without introducing tight coupling between the domain and the infrastructure layers.

We’ll be using Docker to host our PostgreSQL database. I set this up right at the beginning when I first initialised the Spring Boot application, but it’s worth revisiting that setup as a starting point for this post.

What Will You Learn In This Post?

In this post, you’ll learn how to integrate PostgreSQL with your Spring Boot guestbook app, focusing on practical steps and some troubleshooting tips I learned along the way. The key topics are:

  • Docker for PostgreSQL
    Set up and manage a PostgreSQL database using Docker Compose for local development.
  • Spring Boot and Docker Compose
    Enable Spring Boot to automate Docker Compose services like PostgreSQL for a seamless local setup.
  • PostgreSQL with Spring Data JPA
    Learn how to add PostgreSQL persistence with Spring Data JPA, including essential dependencies and configuration.
  • Decoupling Domain Logic
    Keep your domain layer independent from infrastructure while integrating Spring Data repositories.

By the end, you’ll have a fully functional guestbook app with clean architecture and database persistence.

Spring Boot Docker Compose Integration

For those unfamiliar, Docker provides a simple way to containerise applications and services, like PostgreSQL, making it easy to spin up a database instance locally without installing anything directly on your machine. This can be especially helpful for maintaining consistency across different development environments.

I’ll admit, one thing I’m definitely guilty of is getting a bit trigger-happy when setting up a Spring Initializr project. I like to tick the boxes for things that sound interesting. But, to be honest, a lot of what I choose is based on a very high-level understanding.

[Screenshot: Spring Initializr with the Docker Compose Support dependency selected]

For example, I know a good deal about Docker, but I had no clue—before diving in—how the integration worked with Spring Boot. If you’re in the same boat, don’t worry—it’s a common experience when starting out. The good news is that learning by doing is a great way to fill in those gaps.

When you select Docker Compose Support in Spring Initializr, you’re enabling Spring Boot to integrate directly with Docker Compose. This is useful if you’re running multiple services, such as a database, message queue, or other dependencies, alongside your Spring Boot application. Definitely overkill in this example though.

It also generates a specific entry in your pom.xml file:

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-docker-compose</artifactId>
      <scope>runtime</scope>
      <optional>true</optional>
    </dependency>

Docker Compose allows you to define and run multi-container Docker applications. With this option enabled, Spring Boot will be aware of the services running in Docker containers defined in your compose file (compose.yaml by default). It helps automate the configuration of those services, making it easier to connect your application to them.

For example, if you have a PostgreSQL database container defined in your docker-compose.yml, Spring Boot can automatically manage the lifecycle of that container—spinning it up when you start the application and shutting it down when you’re done. This feature simplifies running local development environments, especially when working with databases or other services that your application depends on.

I’m sure it does more than this, but that’s the depth of my knowledge so far.
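One thing I did spot in the Spring Boot docs is that this behaviour is configurable from application.properties. A sketch, with the property names as I understand them from the docs:

```properties
# Turn the Docker Compose integration off entirely
spring.docker.compose.enabled=false

# Or keep it on, but control what Spring Boot does with the containers:
# none, start-only, or start-and-stop (the default)
spring.docker.compose.lifecycle-management=start-and-stop
```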

One of the most curious aspects about adding this dependency is that the newly generated project won’t actually boot any more:

 :: Spring Boot ::                (v3.3.3)

2024-09-17T18:08:25.892+01:00  INFO 523926 --- [demo] [           main] com.example.demo.DemoApplication         : Starting DemoApplication using Java 21.0.2 with PID 523926 (/home/chris/Downloads/demo/target/classes started by chris in /home/chris/Downloads/demo)
2024-09-17T18:08:25.894+01:00  INFO 523926 --- [demo] [           main] com.example.demo.DemoApplication         : No active profile set, falling back to 1 default profile: "default"
2024-09-17T18:08:25.925+01:00  INFO 523926 --- [demo] [           main] .s.b.d.c.l.DockerComposeLifecycleManager : Using Docker Compose file '/home/chris/Downloads/demo/compose.yaml'
2024-09-17T18:08:26.402+01:00 ERROR 523926 --- [demo] [           main] o.s.boot.SpringApplication               : Application run failed

org.springframework.boot.docker.compose.core.ProcessExitException: 'docker compose --file /home/chris/Downloads/demo/compose.yaml --ansi never config --format=json' failed with exit code 15.

Stdout:


Stderr:
validating /home/chris/Downloads/demo/compose.yaml: services must be a mapping

	at org.springframework.boot.docker.compose.core.ProcessRunner.run(ProcessRunner.java:96) ~[spring-boot-docker-compose-3.3.3.jar:3.3.3]
	at org.springframework.boot.docker.compose.core.DockerCli.run(DockerCli.java:80) ~[spring-boot-docker-compose-3.3.3.jar:3.3.3]
	at org.springframework.boot.docker.compose.core.DefaultDockerCompose.hasDefinedServices(DefaultDockerCompose.java:71) ~[spring-boot-docker-compose-3.3.3.jar:3.3.3]
	at org.springframework.boot.docker.compose.lifecycle.DockerComposeLifecycleManager.start(DockerComposeLifecycleManager.java:112) ~[spring-boot-docker-compose-3.3.3.jar:3.3.3]
	at org.springframework.boot.docker.compose.lifecycle.DockerComposeListener.onApplicationEvent(DockerComposeListener.java:53) ~[spring-boot-docker-compose-3.3.3.jar:3.3.3]
	at org.springframework.boot.docker.compose.lifecycle.DockerComposeListener.onApplicationEvent(DockerComposeListener.java:35) ~[spring-boot-docker-compose-3.3.3.jar:3.3.3]
	at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:185) ~[spring-context-6.1.12.jar:6.1.12]
	at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:178) ~[spring-context-6.1.12.jar:6.1.12]
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:156) ~[spring-context-6.1.12.jar:6.1.12]
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:138) ~[spring-context-6.1.12.jar:6.1.12]
	at org.springframework.boot.context.event.EventPublishingRunListener.multicastInitialEvent(EventPublishingRunListener.java:136) ~[spring-boot-3.3.3.jar:3.3.3]
	at org.springframework.boot.context.event.EventPublishingRunListener.contextLoaded(EventPublishingRunListener.java:98) ~[spring-boot-3.3.3.jar:3.3.3]
	at org.springframework.boot.SpringApplicationRunListeners.lambda$contextLoaded$4(SpringApplicationRunListeners.java:72) ~[spring-boot-3.3.3.jar:3.3.3]
	at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
	at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:118) ~[spring-boot-3.3.3.jar:3.3.3]
	at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:112) ~[spring-boot-3.3.3.jar:3.3.3]
	at org.springframework.boot.SpringApplicationRunListeners.contextLoaded(SpringApplicationRunListeners.java:72) ~[spring-boot-3.3.3.jar:3.3.3]
	at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:433) ~[spring-boot-3.3.3.jar:3.3.3]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:334) ~[spring-boot-3.3.3.jar:3.3.3]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1363) ~[spring-boot-3.3.3.jar:3.3.3]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1352) ~[spring-boot-3.3.3.jar:3.3.3]
	at com.example.demo.DemoApplication.main(DemoApplication.java:10) ~[classes/:na]


Process finished with exit code 1

Because although the generated project comes with a compose.yaml file, the file contents are invalid: it declares a services: key with nothing under it, which is exactly what the “services must be a mapping” error is complaining about. Kinda odd.

[Screenshot: the default generated compose.yaml, which is invalid out of the box]

So we need to add at least one service in here to make the Spring Boot project actually bootable.

As an aside, I’d say this – for me – is where the .NET developer experience is nicer. Generally that side of the world seems a lot more noob friendly than Spring Boot. But hey ho, here we are.

Adding A Postgres Database

There are a few steps we need to take to add persistence to our project using PostgreSQL. The first step is to declare our PostgreSQL service inside the compose.yaml file.

Here’s a basic example of how to define the Postgres service:

services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app_user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: app_db
    ports:
      - "5655:5432"

In this setup we’re pulling the postgres:16-alpine image from Docker Hub. I’ve had some issues with the official Postgres 16 image, and besides that, alpine images are generally smaller, which is never a bad thing.

We set up environment variables to define the Postgres user, password, and database name. You can use whatever you like here, but for simplicity these are the common values I always use for my own personal dev work.

We’re mapping Postgres port 5432 inside the container to port 5655 on our local machine. Change the port mapping to whatever you like. I tend to have a bunch of projects that use Postgres, and so I throw in a fairly random number to avoid conflicts between setups.

With these additions, we should now be able to start our Spring Boot project at the very least.

 :: Spring Boot ::                (v3.3.3)

2024-09-17T18:51:49.512+01:00  INFO 539902 --- [demo] [           main] com.example.demo.DemoApplication         : Starting DemoApplication using Java 21.0.2 with PID 539902 (/home/chris/Downloads/demo/target/classes started by chris in /home/chris/Downloads/demo)
2024-09-17T18:51:49.514+01:00  INFO 539902 --- [demo] [           main] com.example.demo.DemoApplication         : No active profile set, falling back to 1 default profile: "default"
2024-09-17T18:51:49.548+01:00  INFO 539902 --- [demo] [           main] .s.b.d.c.l.DockerComposeLifecycleManager : Using Docker Compose file '/home/chris/Downloads/demo/compose.yaml'
2024-09-17T18:51:50.374+01:00  INFO 539902 --- [demo] [utReader-stderr] o.s.boot.docker.compose.core.DockerCli   :  Network demo_default  Creating
2024-09-17T18:51:50.431+01:00  INFO 539902 --- [demo] [utReader-stderr] o.s.boot.docker.compose.core.DockerCli   :  Network demo_default  Created
2024-09-17T18:51:50.432+01:00  INFO 539902 --- [demo] [utReader-stderr] o.s.boot.docker.compose.core.DockerCli   :  Container demo-postgres-1  Creating
2024-09-17T18:51:50.475+01:00  INFO 539902 --- [demo] [utReader-stderr] o.s.boot.docker.compose.core.DockerCli   :  Container demo-postgres-1  Created
2024-09-17T18:51:50.476+01:00  INFO 539902 --- [demo] [utReader-stderr] o.s.boot.docker.compose.core.DockerCli   :  Container demo-postgres-1  Starting
2024-09-17T18:51:50.645+01:00  INFO 539902 --- [demo] [utReader-stderr] o.s.boot.docker.compose.core.DockerCli   :  Container demo-postgres-1  Started
2024-09-17T18:51:50.645+01:00  INFO 539902 --- [demo] [utReader-stderr] o.s.boot.docker.compose.core.DockerCli   :  Container demo-postgres-1  Waiting
2024-09-17T18:51:51.147+01:00  INFO 539902 --- [demo] [utReader-stderr] o.s.boot.docker.compose.core.DockerCli   :  Container demo-postgres-1  Healthy
2024-09-17T18:51:52.671+01:00  INFO 539902 --- [demo] [           main] com.example.demo.DemoApplication         : Started DemoApplication in 3.337 seconds (process running for 3.627)

Process finished with exit code 0

It’s not much, but it’s progress.

You should also be able to see that Spring Boot did take the container offline when the app stopped:

$ docker ps -a
CONTAINER ID   IMAGE                                                       COMMAND                  CREATED              STATUS                          PORTS                                                                                       NAMES
aea6bde3f14a   postgres:16-alpine                                          "docker-entrypoint.s…"   About a minute ago   Exited (0) About a minute ago                                                                                               demo-postgres-1

Unfortunately the output from docker console commands is designed for people with 32″ 5K screens or larger, so it doesn’t tend to copy / paste that nicely into tiny WordPress blog posts.

That takes care of our database server, but we still need database connectivity.

Adding Postgres Connectivity Dependencies

Adding a PostgreSQL service in Docker is only one part of the process. To make the Spring Boot application actually communicate with the database, we also need to include two important dependencies in the pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
</dependency>

But why are these necessary?

spring-boot-starter-data-jpa

This dependency brings in Spring Data JPA, which is Spring’s abstraction over JPA (Java Persistence API). It simplifies database access by handling much of the boilerplate code that you would otherwise have to write to manage database transactions and queries.

  • JPA is a standard for ORM (Object-Relational Mapping) in Java, allowing you to map Java objects to database tables.
  • Spring Data JPA builds on this by providing repository interfaces, so you can perform CRUD operations without writing SQL directly.

Shortly we will see how this gives us a really easy time when dealing with the database because we are letting Spring Boot handle a lot of the heavy lifting when it comes to persistence.

However, I am somewhat uneasy about this as I already know I much prefer handling database stuff myself over an ORM. Right now I’m not worrying too much about this, but I think in the longer term I’m going to be moving away from JPA… but who knows.

postgresql

This is the PostgreSQL JDBC driver, which is necessary for the Spring Boot application to actually communicate with a PostgreSQL database. Think of it as the bridge between your Java application and the PostgreSQL database.

  • The driver converts Java calls into the corresponding SQL queries that PostgreSQL understands and executes.
  • The scope is set to runtime, meaning the driver is only required when the application is running, not at compile time.

In short, the first dependency allows Spring Boot to manage our data model and repositories through JPA, and the second ensures that our application can communicate with a PostgreSQL database using the JDBC protocol.

One way the developer experience could be massively improved—at least in my opinion—is on the Spring Initializr site. For example, when you select the Spring Data JPA dependency, it should prompt you to choose the database you’re going to connect to, like PostgreSQL, and automatically add, or at least very strongly suggest, the appropriate JDBC driver.

At the moment, Spring Initializr doesn’t provide this level of guidance. It assumes that the user knows they need to add the correct JDBC driver separately. For beginners or even experienced developers who might not immediately think of this step, it feels like it adds unnecessary friction.

Postgres Datasource Connectivity Config

For some very unusual reason, I initially thought that by using the Docker Compose integration, Spring Boot would just magically know how to connect to my database. After a bit of thought and some head scratching, I realised this wasn’t the case—I still needed to provide my own data source configuration.

This isn’t necessarily obvious at first. While Spring Boot simplifies many things, connecting to a database still requires some manual setup. If you’ve read the Spring documentation and / or watched the Spring Academy videos, you’ll know you need to do this, but it’s still easy to miss for newcomers such as myself.

Here’s the configuration I ended up using:

# /src/main/resources/application.properties

spring.application.name=guest-book
spring.datasource.url=jdbc:postgresql://localhost:5655/app_db
spring.datasource.username=app_user
spring.datasource.password=password
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.format_sql=true

The settings for spring.datasource... match the values I used when defining the Postgres service in the compose.yaml file earlier.

The JPA settings are a little more interesting, and one of them was really misleading.

These two:

spring.jpa.show-sql=true
spring.jpa.properties.hibernate.format_sql=true

They are pretty obvious – once you know they exist.

The first makes sure you see the actual SQL statements being run by the application. This is important as JPA abstracts / hides away a lot of the heavy lifting, but in doing so could (potentially, I figure) be running some super inefficient queries.

Also, in doing this, it might help me spot things that need indexes? Maybe?

The other setting formats the logged SQL, so it’s easier to read in the terminal.

This setting, however, is misleading:

spring.jpa.hibernate.ddl-auto=update

This property controls how Hibernate manages the database schema.

Setting it to update tells Hibernate to automatically adjust the database schema to match the entities in your project without dropping existing data.

This is sold as being useful in development because it allows your database to evolve as you modify your JPA entities, but that you shouldn’t enable it for production for … obvious reasons. Fine.
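For reference, the commonly documented values this property accepts (as I understand them from the Hibernate docs) are:

```properties
# none        - do nothing with the schema
# validate    - check the schema matches the entities; fail fast if not
# create      - drop and recreate the schema at startup (existing data is lost)
# create-drop - like create, but also drops the schema at shutdown
# update      - add missing tables and columns to match the entities
spring.jpa.hibernate.ddl-auto=update
```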

Only, it doesn’t actually update like I thought.

If I have this entity:

package uk.co.a6software.guest_book.infrastructure.entity;

import jakarta.persistence.*;
import uk.co.a6software.guest_book.core.model.Message;

import java.io.Serializable;
import java.time.LocalDateTime;

@Entity
public class MessageEntity implements Message, Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;
    @Column(nullable = false)
    private String name;

    // ... other fields and methods
}

Let’s say I decide that hey, actually, the name field should be allowed to be null. And I change this annotation.

My assumption was this should then do some SQL magic to make this table change.

Nada.

The only way I could get this to work was by entirely dropping the table, then allowing Hibernate to create the table again, this time giving me the nullable = true variant.

I figured it must be a mistake in my setup, and spent ages looking into the problem.

Turns out… nope. That’s behaving as intended.

And ultimately the solution is: use a proper migrations library.

Which I absolutely agree with … but that setting naming is sooooo misleading. I wasted ages on that, and finally found an answer on some random Google result 🤯

What Do I Need In The Service Layer To Make This Work?

Before we go any further, I think it’s worth taking a moment to revisit what the current service looks like. For this example, I’ve named it the Spring Message Service, and it implements the MessageService interface, which is defined inside the core package.

The reason for structuring it this way is to keep the domain logic decoupled from the framework-specific code. The core package contains the business logic, such as defining what a message is and how it should behave, while the service implementation in Spring handles the actual interaction with the infrastructure (such as databases or external APIs).

This separation of concerns helps maintain a clean architecture and makes your application more flexible and easier to test, as the core logic doesn’t directly depend on Spring or other frameworks.

package uk.co.a6software.guest_book.infrastructure.service;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;
import uk.co.a6software.guest_book.core.model.Message;
import uk.co.a6software.guest_book.core.service.MessageService;

import java.util.List;

@Service
@Qualifier("springMessageService")
public class SpringMessageService implements MessageService {
    private static final Logger logger = LoggerFactory.getLogger(SpringMessageService.class);

    private final MessageService messageService;

    public SpringMessageService(@Qualifier("coreMessageService") MessageService messageService) {
        this.messageService = messageService;
    }

    @Override
    public List<Message> getMessages() {
        return this.messageService.getMessages();
    }

    @Override
    public void postMessage(Message message) {
        this.messageService.postMessage(message);
    }
}

So that’s how it’s currently set up.

Now I’m wondering, what exactly do I need to do to modify this SpringMessageService to work with entities?

I know I need entities because I’ll be interacting with the database, and entities are how this whole process ties together.

But part of me is thinking: am I going to have to change everything about this service? And if I do, what was the point in creating it the way I did in the first place?

Another part of me is wondering, if I don’t completely overhaul it, am I just trying to shoehorn stuff in to keep the clean architecture approach? Is this really adding value to the project, or is it just making things feel more “professional,” for lack of a better word?

Defining The Message Entity

Before making any changes, let’s create the entity implementation of the Message interface:

package uk.co.a6software.guest_book.infrastructure.entity;

import jakarta.persistence.*;
import uk.co.a6software.guest_book.core.model.Message;

import java.io.Serializable;
import java.time.LocalDateTime;

@Entity
public class MessageEntity implements Message, Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;
    @Column(nullable = false)
    private String name;
    @Column(nullable = false)
    private String message;
    private LocalDateTime time;

    protected MessageEntity() {
    }

    public MessageEntity(String name, String message, LocalDateTime time) {
        this.name = name;
        this.message = message;
        this.time = time;
    }

    @Override
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    @Override
    public LocalDateTime getTime() {
        return time;
    }

    public void setTime(LocalDateTime time) {
        this.time = time;
    }
}

The first thing that jumps out at me when I look at this entity implementation is that it only holds four fields, yet it’s massive, at around 55 lines. It feels incredibly noisy. There are setters, getters, two constructors, annotations, and all the imports. It just makes the whole thing feel bloated.

I started wondering if it would be possible to use a Java record here to cut down on some of this verbosity.

Unfortunately, the answer is no—at least not in a typical JPA use case.

While records are great for immutability and conciseness, they don’t play nicely with JPA: entities are expected to be mutable, with setters and a no-argument constructor, whereas a record’s fields are final and its canonical constructor takes every component.

This is yet another area where I think Java could definitely benefit from more compact getter-setter combos, similar to what you get in .NET. In .NET, the syntax is much more concise, and you don’t need all this boilerplate just to handle simple property access. I understand why Java has its current approach—it emphasizes explicitness, which can be great for readability and maintainability in large applications. But I also fully understand the arguments about Java feeling verbose, especially in scenarios like this one.

It’s one of those downsides I’ve heard others speak of when discussing Java. The language’s verbosity can be a bit much at times, particularly when you’re trying to write clean, simple code.
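To make the comparison concrete, here’s what those fields would look like as a record. This is a standalone sketch (the class and names are mine, not from the project) – perfectly fine for a DTO, but, as above, not usable as a JPA entity:

```java
import java.time.LocalDateTime;

public class RecordSketch {
    // Hypothetical: the entity's data as a record. One line replaces the
    // getters, setters, and constructors, but a record's fields are final
    // and it has no no-arg constructor, so it can't be a JPA @Entity.
    record MessageRecord(String name, String message, LocalDateTime time) {}

    public static void main(String[] args) {
        MessageRecord m = new MessageRecord("Chris", "Hello!", LocalDateTime.now());
        // Accessors are named after the components (no get- prefix)
        System.out.println(m.name() + ": " + m.message());
    }
}
```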

I Don’t Like My Entity Naming

The second thing I want to address about the entity is the naming. I ended up calling the class MessageEntity, and to be honest, I’m not a huge fan of that.

It reminds me of the naming convention in .NET where interface names are prefixed with an uppercase “I” (e.g., IMessage), which can feel a bit redundant.

In this case, I had to suffix my class with Entity because I’ve already used the name Message in my core domain model. And unfortunately, in Java, you can’t shorten or alias imports the way you might be able to in other languages, so you’re left with two options: either have a long implements or extends statement (e.g., core.Message vs. infrastructure.Message), or rename the class to avoid conflicts altogether.

Neither solution feels great—it’s a bit of a lose-lose situation. Suffixing the class name with Entity helps distinguish it from the core model, but it can feel awkward and repetitive. On the other hand, keeping the original name and fully qualifying the import makes the code harder to read. I opted for the former, but it still feels like a workaround more than a clean solution.

Another small annoyance when trying to do the “right thing”?

Creating The Database Table From The Entity

With the entity defined, when we boot up the application—thanks to the settings we configured in the application.properties file—we should see Hibernate’s logging output showing us that the message entity table was created. This is a really nice feature because it confirms that Hibernate is working behind the scenes to set up our database schema based on the entity we’ve defined.

2024-09-18T17:02:29.615+01:00  INFO 1017074 --- [guest-book] [           main] o.h.e.t.j.p.i.JtaPlatformInitiator       : HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
Hibernate: 
    create table message_entity (
        id bigint generated by default as identity,
        message varchar(255) not null,
        name varchar(255) not null,
        time timestamp(6),
        primary key (id)
    )

However, as I talked about earlier, one important thing to point out (and something that caught me out at first) is that Hibernate’s update mode only ever adds things. It will create missing tables and columns, but it won’t alter or remove existing ones, so changing a column constraint (like making name nullable) or renaming a field won’t be reflected in the database. This can be a bit misleading if you’re expecting full automatic updates.

To handle schema changes, you’ll need to either manually update the database schema or use a tool like Flyway or Liquibase to manage database migrations. These tools allow you to version control your database schema changes and ensure that the database stays in sync with your application’s entity definitions as they evolve.

You do need a pretty good understanding of SQL to use these tools. No bad thing, but again it’s a barrier for beginners.
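As a taster, the earlier name column change would, with Flyway, live in a versioned SQL migration file. Flyway’s convention is V&lt;version&gt;__&lt;description&gt;.sql; the exact filename here is made up:

```sql
-- src/main/resources/db/migration/V2__make_name_nullable.sql (hypothetical name)
ALTER TABLE message_entity ALTER COLUMN name DROP NOT NULL;
```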

Repository Interfaces: Rapid Application Development(?)

I think in software development, people tend to fall into one of two camps.

The first camp prefers to do a lot of pre-setup—boilerplate-heavy work—thinking ahead to when the project gets larger. They want to ensure the infrastructure is decoupled, scalable, and ready for future growth, even if that means doing extra work up front. This approach is about preparing for long-term success, even if it might feel slower and more tedious in the early stages.

Then there’s the other type of developer, who wants everything to be as quick and easy as possible. These developers focus on getting the code out the door as fast as they can. They might think, “Why bother over-engineering it now? If the app ever grows to a significant size, I’ll either be off the project, or, more realistically, it won’t ever get that big anyway.”

Surely by now, if you have read this far, you realise I fall into the first camp.

So behold, the repository interface:

package uk.co.a6software.guest_book.infrastructure.repository;

import org.springframework.data.jpa.repository.JpaRepository;
import uk.co.a6software.guest_book.infrastructure.entity.MessageEntity;

public interface MessageRepository extends JpaRepository<MessageEntity, Long> {
}

You only need to define an interface – not even an implementation of that interface – and Spring Data generates the implementation for you at runtime.

Out of the box, this gives you a bunch of ready-made methods that you can then use. I couldn’t find a full list in the docs, but you can use your IDE to take you to the JpaRepository interface definition to see them ‘at source’:

[Screenshot: the JpaRepository interface definition, showing its built-in methods]

Beyond this, and where I start to get really uneasy about things, is how you can have SQL queries derived for you simply by following a method naming convention on that repository interface.

Anyway, I’m a long way off worrying about things like that just now. The basic Intellisense methods were all I needed.
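To illustrate the idea without a database, here’s a self-contained sketch. The MessageFinder interface and the in-memory lambda are stand-ins I’ve made up; in real Spring Data you would declare findByName on MessageRepository and Spring would derive the query (roughly “… where name = ?”) from the method name at runtime:

```java
import java.util.List;

public class DerivedQuerySketch {
    // Hypothetical: declaring the method is all you'd do on a real
    // Spring Data repository -- no implementation required.
    interface MessageFinder {
        List<String> findByName(String name);
    }

    public static void main(String[] args) {
        List<String> names = List.of("Alice", "Bob", "Alice");
        // In-memory stand-in for the implementation Spring would generate:
        MessageFinder finder = name -> names.stream().filter(name::equals).toList();
        System.out.println(finder.findByName("Alice").size());
    }
}
```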

Back To The Service Layer

As we saw above, there are two methods in the SpringMessageService and both need changes in order to work with MessageRepository.

The easier of the two is the change to postMessage. Here’s the existing code:

    @Override
    public void postMessage(Message message) {
        this.messageService.postMessage(message);
    }

And the change:

    @Override
    public void postMessage(Message message) {
        this.messageRepository.saveAndFlush(MessageTransformer.toMessageEntity(message));
        this.messageService.postMessage(message);
    }

OK, so a one line change, with a lot to unpack.

This service is going to be called by one of the controllers, e.g.

    @PostMapping("/post")
    public ResponseEntity<String> postMessage(@RequestBody MessageRequest messageRequest) {
        MessageImpl message = new MessageImpl(messageRequest.user(), messageRequest.message());
        messageService.postMessage(message);

        return ResponseEntity.ok(message.getMessage());
    }

That’s illustrative.

The point is: this method receives some object that implements the Message interface.

However, the MessageRepository cannot work with something as loose as “some object that implements the Message interface”.

It must work with entities.

Therefore we need a way to convert, or transform, Message objects into MessageEntity instances.

For this I’ve gone with a transformer implementation, specifying a way to convert as follows:

package uk.co.a6software.guest_book.infrastructure.transformer;

import uk.co.a6software.guest_book.core.model.Message;
import uk.co.a6software.guest_book.core.model.MessageImpl;
import uk.co.a6software.guest_book.infrastructure.entity.MessageEntity;

public class MessageTransformer {
    public static Message toMessage(MessageEntity messageEntity) {
        return new MessageImpl(messageEntity.getName(), messageEntity.getMessage(), messageEntity.getTime());
    }

    public static MessageEntity toMessageEntity(Message message) {
        return new MessageEntity(message.getName(), message.getMessage(), message.getTime());
    }
}

This class provides static methods to convert between a single instance of MessageEntity and Message.

We will be working with Lists of objects in the code, so we’ll need a way to apply the transformation to each item in the List. More on that below.

Essentially, it’s acting as a utility to handle the conversion between our database representation (MessageEntity) and the core domain model (Message).

Looking back at the entity class, this explains why I needed the extra constructor:

why i needed the extra constructor
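For reference, here is a minimal sketch of what that entity might look like. The field names come from the transformer above, but the annotations, ID strategy, and the LocalDateTime type for time are assumptions on my part; the real entity may differ. The point is that JPA needs a no-argument constructor, and the transformer needs the ‘extra’ constructor taking name, message, and time:

```java
package uk.co.a6software.guest_book.infrastructure.entity;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import java.time.LocalDateTime;

// A minimal sketch, not the actual project code.
@Entity
public class MessageEntity {
    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private String message;
    private LocalDateTime time;

    // JPA requires a no-arg constructor
    protected MessageEntity() {
    }

    // The 'extra' constructor used by MessageTransformer.toMessageEntity
    public MessageEntity(String name, String message, LocalDateTime time) {
        this.name = name;
        this.message = message;
        this.time = time;
    }

    public String getName() { return name; }
    public String getMessage() { return message; }
    public LocalDateTime getTime() { return time; }
}
```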

For absolute clarity, the code could be expanded:

    @Override
    public void postMessage(Message message) {
        MessageEntity messageEntity = MessageTransformer.toMessageEntity(message);
        this.messageRepository.saveAndFlush(messageEntity);
        this.messageService.postMessage(message);
    }

But I’ve opted to put everything on one line and avoid the extra variable assignment.

The continued call to this.messageService.postMessage(message); is the one that leaves me with the unanswered question.

Do I still need this?

If I don’t… what’s the point in the separation?

But if I do, what is the separation really adding here?

I feel like I’m missing something obvious, so I’d appreciate any thoughts or feedback in the comments.

Retrieving Data From The Database

Following on from my question about the saving / postMessage method, the getMessages method makes that question resonate even louder.

Here’s the changed code:

    @Override
    public List<Message> getMessages() {
        List<MessageEntity> messageEntities = this.messageRepository.findAll();
        List<Message> messages = messageEntities.stream().map(MessageTransformer::toMessage).toList();
        return messages;
    }

The MessageService interface in the core package specifies this method should return List<Message>:

package uk.co.a6software.guest_book.core.service;

import uk.co.a6software.guest_book.core.model.Message;

import java.util.List;

public interface MessageService {
    void postMessage(Message message);

    List<Message> getMessages();
}

Great.

We can adhere to this by getting all the entities from the database, and then using Java’s stream map approach to transform a List<MessageEntity> into a List<Message>.

It’s fairly long-winded, I find. Create a stream, then do the map operation, and finally collect to some data structure. Very explicit, very verbose… personally I just prefer the JavaScript way (.map), but I get why that’s not possible here.
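To show the pattern in isolation, here is a standalone version using simple stand-in records rather than the real classes (the types and data here are just for illustration):

```java
import java.util.List;

public class StreamMapDemo {
    // Stand-in types, just for this illustration
    record MessageEntity(String name, String message) {}
    record Message(String name, String message) {}

    static Message toMessage(MessageEntity e) {
        return new Message(e.name(), e.message());
    }

    public static void main(String[] args) {
        List<MessageEntity> entities = List.of(
                new MessageEntity("Alice", "Hello"),
                new MessageEntity("Bob", "Hi there"));

        // Create a stream, map each entity, collect back to a List
        List<Message> messages = entities.stream()
                .map(StreamMapDemo::toMessage)
                .toList();

        System.out.println(messages.size());
        System.out.println(messages.get(0).name());
    }
}
```

The .toList() terminal operation (Java 16+) saves a little of the ceremony compared with the older .collect(Collectors.toList()).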

And then we have a list of messages which ultimately conforms to the previously agreed interface.

But… well, this implementation no longer calls the MessageService defined in the core at all.

So… do I even need to bother with that service anymore?

Maybe not.

Maybe all I need in the core is the interface, with no implementation there? Perhaps I should now delete it, as it was a hangover / artefact from an earlier iteration of the code. I’m honestly not sure, but having written this out, that’s the way I’m now swaying.
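For what it’s worth, here is a sketch of what that direction could look like: the Spring-side service implements the core MessageService interface directly against the repository, and the in-memory core implementation disappears. The package paths are assumptions, and this is a sketch of the option, not a recommendation:

```java
package uk.co.a6software.guest_book.infrastructure.service;

import java.util.List;
import org.springframework.stereotype.Service;
import uk.co.a6software.guest_book.core.model.Message;
import uk.co.a6software.guest_book.core.service.MessageService;
import uk.co.a6software.guest_book.infrastructure.repository.MessageRepository;
import uk.co.a6software.guest_book.infrastructure.transformer.MessageTransformer;

// Sketch: the repository-backed service becomes the only
// implementation of the core MessageService interface.
@Service
public class SpringMessageService implements MessageService {
    private final MessageRepository messageRepository;

    public SpringMessageService(MessageRepository messageRepository) {
        this.messageRepository = messageRepository;
    }

    @Override
    public void postMessage(Message message) {
        this.messageRepository.saveAndFlush(MessageTransformer.toMessageEntity(message));
    }

    @Override
    public List<Message> getMessages() {
        return this.messageRepository.findAll().stream()
                .map(MessageTransformer::toMessage)
                .toList();
    }
}
```

The core package still owns the contract; only the implementation moves to the infrastructure layer.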

Demo Time

Example Code

The code for this post is available as a branch on GitHub.
