
Lukas Rychtecky

Posted on 26th November 2019

Self-Service for Online Payment Platform in Zimpler

Case studies

This article describes how we helped Zimpler design, implement, and maintain a self-service dashboard for its online payment platform. The platform lets customers make instant payments: a customer can deposit or withdraw money in real time, with no need to wait for an actual bank transfer.


In the following paragraphs we describe how we work. The main challenge was to add a new administration dashboard to an existing complex system and integrate it into the existing build and deployment platform. The dashboard is a UI where a customer can view and manage their transactions. The article walks through the defined goals, our suggested solution, and the technical decisions and architecture.

Abstract

Self-care is an administration dashboard for end users of the platform. A user can update their preferences, view and manage transactions, and set up notifications.

This case study describes the process of adding a new self-care into an existing payment platform. The platform is based on a microservices architecture (this was the state of the project when we took it over, so this article doesn't discuss the pros and cons of microservices). Other teams work on the system, so adding the new self-care could not block or affect any other part of the platform or its delivery. Every piece of work must be deliverable independently of the work of other teams. This condition is a business rule that allows faster, independent reactions to market needs.

The domain goals:

  1. Offer a dashboard for actions that require an authenticated end user and are not part of other existing flows
  2. Enable new user stories where part of the user interaction happens on merchant sites and elsewhere
  3. Integrate this dashboard into the online payment site

The technical goals:

  1. Create a frontend with a bespoke backend
  2. Have the bespoke backend serve static content as well as an API that exposes existing internal services
  3. Don't block or affect any other part of the platform
  4. Integrate the self-care into the existing deployment pipeline

Solution

We at Flexiana have been working on this robust platform for over a year, maintaining it and adding new functionality together with other teams.

The solution combines the technology stack already used by the client with some improvements. The platform's current architecture is well known to us.

CI/CD was done using AWS services (CodePipeline, ECS). Most data was provided by existing services deployed to the same private network in the cloud and not publicly reachable.

The project builds into a single fat JAR that holds both the backend code and the compiled ClojureScript frontend. This makes deployment easier and plays well with browser security measures such as CORS, since the frontend calls the API on the same domain the page is loaded from.
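
To illustrate the same-origin setup, here is a minimal sketch of such a server in Clojure, assuming Ring and Compojure (the namespaces and routes are illustrative, not the project's actual code):

```clojure
(ns selfcare.core
  (:require [compojure.core :refer [defroutes GET context]]
            [compojure.route :as route]
            [ring.adapter.jetty :refer [run-jetty]]
            [ring.middleware.json :refer [wrap-json-response]]))

;; One process serves both the API and the compiled ClojureScript
;; assets bundled in the JAR, so browser calls stay same-origin
;; and CORS never comes into play.
(defroutes app
  (context "/api" []
    (GET "/health" [] {:status 200 :body {:status "ok"}}))
  (route/resources "/" {:root "public"})   ; frontend build inside the JAR
  (route/not-found "Not found"))

(defn -main [& _]
  (run-jetty (wrap-json-response app) {:port 8080 :join? false}))
```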

Architecture

The self-care was split into two main components: an API and a user interface. Both components are packaged into one JAR file. The figure below shows the communication with other services. All services ran in a virtual private network, which provided solid security, so there was no authentication between services.

An authentication service was used to verify a user's identity. The same service was used in other parts of the platform, e.g. in the payment gateway, so for a user there was no difference between logging into the self-care and using the payment gateway.
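
The authentication service's API is internal to the platform, so the following is only a hypothetical Ring middleware sketching the idea: requests without an authenticated session are bounced to the shared login flow.

```clojure
(ns selfcare.auth)

;; Hypothetical middleware: `:user-id` in the session stands in for
;; whatever identity the platform's authentication service establishes.
(defn wrap-authenticated [handler]
  (fn [request]
    (if (get-in request [:session :user-id])
      (handler request)
      {:status 302 :headers {"Location" "/login"} :body ""})))
```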

The service holds state in a session. If the session were kept in memory, it would be lost every time a new version was deployed. Because we used continuous delivery (see the Delivery pipeline section), every service ran behind a load balancer with multiple instances, so an in-memory session would also make behaviour inconsistent across instances. The session was therefore moved out of the service into external session storage. Redis was chosen because it was already used in other parts of the system.
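
As a minimal sketch, assuming Ring sessions and the Carmine Redis client (the article doesn't name the exact client used), the swap from in-memory to Redis-backed sessions is essentially a change of the session store:

```clojure
(ns selfcare.session
  (:require [ring.middleware.session :refer [wrap-session]]
            [taoensso.carmine.ring :refer [carmine-store]]))

;; Connection details are illustrative; per twelve-factor they would
;; come from the environment, not the code.
(def redis-conn {:pool {} :spec {:uri "redis://redis:6379"}})

;; Any instance behind the load balancer can now serve the user,
;; and sessions survive redeploys.
(defn wrap-shared-session [handler]
  (wrap-session handler {:store (carmine-store redis-conn)}))
```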

Figure: Architecture scheme

The Twelve-factor app

The Twelve-factor app is a methodology for building software-as-a-service apps. This methodology helps with things like:

  • Code base structure
  • Dependencies
  • Configurations
  • Concurrency (shared state)
  • Environments
  • Logs
  • And others

The Twelve-factor app methodology is a de facto industry standard and best practice. It creates an interface to the infrastructure, so there is no need to differentiate between technologies (operating systems, programming languages, etc.). All services in the system are configured, deployed, and monitored in the same way, no matter what technologies they use internally.

Environments

Every application needs more than just a production environment: a programmer needs an environment to develop in, QA needs somewhere to check the quality of the product, and of course there is the real production environment.

Because we were adding a new service into an existing system, the environments were already given.

The environments were:

  • Localhost – for programmers
  • Staging – an integration environment with the other services and no real data
  • Sandbox – a production-like playground environment for merchants
  • Production – with real data

Thanks to following the Twelve-factor app methodology, no changes in the service itself were needed to prepare it for any environment.
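
For example, under the twelve-factor rules configuration comes from the environment, so the same JAR runs unchanged everywhere. A minimal sketch (the variable names are illustrative, not the project's actual configuration):

```clojure
(ns selfcare.config)

(defn env
  "Read an environment variable, falling back to a default."
  [k default]
  (or (System/getenv k) default))

;; The same artifact picks up per-environment values at start-up:
;; localhost, staging, sandbox and production only differ in env vars.
(def config
  {:port      (Integer/parseInt (env "PORT" "8080"))
   :redis-uri (env "REDIS_URI" "redis://localhost:6379")
   :auth-url  (env "AUTH_SERVICE_URL" "http://auth.internal")})
```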

Delivery pipeline

The whole delivery pipeline was automated for the staging environment. The pipeline started whenever a new piece of code (a pull request) was merged. When the tests passed, a Docker image was built and pushed to Docker Hub, which let us deploy any version of a service; this was helpful when we needed to debug or find bugs in a particular version. Pushing the image to an AWS S3 bucket allowed us to use AWS CodePipeline to finish the deployment process.

Delivering code to production was semi-automated. The trigger wasn't connected to the version control system; starting it was the only manual step (a company-internal policy). Any programmer could deploy a new version via the AWS CLI tools: once a change was tested on staging, the programmer used the AWS CLI to start a CodePipeline process. This process picked up the current Docker image from the AWS S3 bucket and deployed it to both environments (sandbox and production) at once.

Technologies

The system contains many services written in a few languages (for historical reasons). Most of the services are written in Clojure, and since we (the Flexiana team working on this system) are experts in Clojure, the decision was made to stick with it. The backend and frontend were built by the same team, so we kept the same language family and chose ClojureScript for the frontend.

We had already built a few frontends with ClojureScript and its well-known libraries. The main libraries were chosen as follows:

  • Reagent for the UI layer. It's a thin wrapper around ReactJS.
  • Re-frame for the frontend architecture. It defines a common way to handle events and side effects in the application, conceptually similar to Flux in JavaScript. A minimal sketch of this flow follows the list.
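
As a minimal sketch of the re-frame event/subscription flow (the names are illustrative, not the self-care's actual code):

```clojure
(ns selfcare.transactions
  (:require [re-frame.core :as rf]))

;; An event updates the app-db...
(rf/reg-event-db
 ::set-transactions
 (fn [db [_ txs]]
   (assoc db :transactions txs)))

;; ...a subscription reads from it...
(rf/reg-sub
 ::transactions
 (fn [db _]
   (:transactions db)))

;; ...and a Reagent component re-renders whenever the value changes.
(defn transaction-list []
  [:ul
   (for [tx @(rf/subscribe [::transactions])]
     ^{:key (:id tx)} [:li (str (:amount tx) " " (:currency tx))])])
```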

As professionals we know that code quality matters not only during development but above all during maintenance. To keep the quality up we used extreme programming practices such as pair programming and code reviews. As automated quality checks we used the following tools (a sketch of how they plug into the build follows the list):

  • Cloverage – a code coverage tool that shows which code is not covered by tests.
  • Kibit – a linter that checks whether an expression could be simplified or replaced by a Clojure core function.
  • Eastwood – a linter that checks code for suspicious expressions.
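
A sketch of how these tools can be wired into a Leiningen build (plugin versions are placeholders, and the project's real build configuration may differ):

```clojure
;; project.clj excerpt (illustrative)
(defproject selfcare "0.1.0-SNAPSHOT"
  :plugins [[lein-cloverage "1.2.2"]
            [lein-kibit "0.1.8"]
            [jonase/eastwood "1.4.0"]]
  ;; `lein quality` runs all three checks in one go, e.g. in CI
  :aliases {"quality" ["do" ["kibit"] ["eastwood"] ["cloverage"]]})
```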

Redis was chosen as the session storage for two main reasons (a minimal usage sketch follows the list):

  1. We have good experience with this in-memory key-value storage
  2. It keeps the technology stack as consistent and small as possible (Redis is already used in other services)
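
A minimal Carmine sketch of what the session storage boils down to (the key format and the 30-minute TTL are assumptions for illustration):

```clojure
(require '[taoensso.carmine :as car])

(def conn {:pool {} :spec {:uri "redis://localhost:6379"}})

;; Write a session value with an expiry, then read it back.
(car/wcar conn
  (car/setex "session:abc123" (* 30 60) "{:user-id 42}"))

(car/wcar conn
  (car/get "session:abc123"))   ; => "{:user-id 42}"
```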

Next steps

The delivery pipeline described in the previous chapters gives a short time to market, but it can still be improved. For the client it is important to keep delivery time as short as possible, so our next step could be to split the delivery pipeline for the backend and the frontend. The frontend can be built and deployed separately, directly to an AWS S3 bucket; AWS S3 is a key-value storage that can serve static content quickly. By splitting the pipeline, frontend deploys would no longer have to wait for the backend to build and test.

Conclusion

Our goal was to deliver an MVP of the self-care application with a short time to market. We knew the application would be around for a long time, so our architecture focused on a solid, stable codebase ready for ongoing improvements.

The delivery pipeline was designed for a short time to market when adding new features. This approach allowed us to do many small deploys during the day without an outage. The application was delivered to production within weeks.

The customer was satisfied not only with the quality of the delivery but especially with the whole process. We currently continue adding new features to the self-care and processing end-user feedback. The new features include integrations with other banks and UI improvements.