Backend frameworks are a big deal when it comes to seamless web development. Choosing the right technology influences the overall work process, the application’s performance, and its scalability. With so many options around, the decision isn’t easy, but we’ll happily save you the hassle by presenting the top 10 backend frameworks worth taking into consideration.
1. .NET 5
.NET 5 is the next major release of .NET Core by Microsoft – a free, open-source, cross-platform framework. The naming has been changed to emphasize that now this is the main implementation of .NET. This new platform is a unified successor of various .NET flavors, including .NET Framework, .NET Standard, and .NET Core. One of the key ideas behind .NET 5 is that no matter what application you are building, your code and project files will look and feel the same.
.NET 5 is a perfect choice when there is a need for high performance and scalability. According to TechEmpower benchmarks, .NET is among the top-performing web frameworks, making .NET 5 and ASP.NET Core the best options to consider. Performance and scalability are especially vital in microservices architectures, where large numbers of services run side by side. ASP.NET Core systems can run on significantly fewer servers or virtual machines, which reduces infrastructure and hosting costs.
.NET is excellent for developing business applications, both for startups and enterprises. It comes with a diverse set of tools that facilitate building enterprise-grade software of varying complexity and with a great variety of features. It uses object-oriented languages, most notably C#, which is clear and intuitive; that has a positive impact on application maintainability and makes troubleshooting easier. Parts of code can easily be reused to build applications faster. What’s more, plenty of external, open-source packages provide ready-to-use functionality that your team doesn’t have to develop and maintain itself. This saves time and money, and in certain cases may also remove the need to hire experts in specific areas.
.NET fits well in the cloud environment. With its set of tools and libraries, ASP.NET Core enables you to build web applications that you can host on all major cloud platforms. However, since Azure is a cloud platform provided by Microsoft, it may be a natural choice for a .NET team. Many Azure products run .NET natively and are integrated with Visual Studio and other IDEs like JetBrains Rider or Visual Studio Code. With all the available debugging, publishing, and CI/CD tools, the product development cycle becomes effective and seamless. What’s also compelling, .NET solutions can easily be packaged into Docker containers to make cloud deployment even more efficient.
2. Entity Framework Core
Entity Framework Core is an open-source, cross-platform data access technology. It serves as an object-relational mapper (ORM), which lets developers work with various databases, usually without writing any SQL. There is no need to write the extra code typically required to map query results to object members (or the other way round), so you can deliver functionality in less time. It also provides an abstraction layer over the database, so changing the database provider, or supporting multiple providers, is relatively easy when the need arises.
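To see what an ORM saves you from, here is the row-to-object mapping boilerplate written by hand. This is a language-neutral sketch in Python with sqlite3 (the `products` table and `Product` class are invented for the example), not EF Core itself:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Product:
    id: int
    name: str
    price: float

def fetch_products(conn: sqlite3.Connection) -> list[Product]:
    # This is exactly the mapping code an ORM such as EF Core generates
    # for you: run the query, then copy each column into the matching
    # object member.
    rows = conn.execute("SELECT id, name, price FROM products").fetchall()
    return [Product(id=r[0], name=r[1], price=r[2]) for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
conn.execute("INSERT INTO products VALUES (1, 'Keyboard', 49.9)")
products = fetch_products(conn)
print(products[0].name)  # Keyboard
```

With an ORM, the `fetch_products` body collapses into a single typed query, and the mapping stays correct when the model changes.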
What’s more, Entity Framework Core supports migrations. Every now and then the database schema changes, so there usually has to be some versioning mechanism to ensure, for instance, that the structure of database tables is right; otherwise, commands executed by the application could fail when accessing data. The migrations feature supports incremental schema updates that keep the database in sync with the application’s data model while preserving existing data.
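The mechanics of incremental, versioned schema updates can be sketched in a few lines. This is a hypothetical minimal migration runner in Python with sqlite3, mimicking what EF Core migrations automate (the table names and migration scripts are made up):

```python
import sqlite3

# Ordered, incremental schema changes. EF Core generates these from
# changes to your data model; here they are written by hand.
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",
]

def migrate(conn: sqlite3.Connection) -> None:
    # Track which migrations already ran, so reruns apply only the new
    # ones and existing data is preserved.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, sql in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(sql)
        conn.execute("INSERT INTO schema_version VALUES (?)", (version,))

conn = sqlite3.connect(":memory:")
migrate(conn)   # applies both migrations
migrate(conn)   # no-op: the schema is already up to date
cols = [c[1] for c in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name', 'email']
```

EF Core adds the hard parts on top of this idea: generating the scripts from model diffs, and rolling them back.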
Currently, more than twenty database providers are supported, including MS SQL, PostgreSQL, and MySQL. Even though we sometimes use Dapper or RepoDB, we believe Entity Framework Core is the best choice.
3. Blazor
Blazor is a framework for building interactive web UIs in C# instead of JavaScript. It employs WebAssembly on the client side, so your C# code is executed directly in the browser. And because real .NET is running on WebAssembly, you can reuse code and libraries from the server-side parts of your application. Alternatively, the client logic can run on the server: UI events are sent to the server to be handled there, and the resulting UI changes are sent back and merged into the DOM.
What’s convenient about choosing this framework is that by using the same language for both the frontend and the backend code, you can avoid duplicating effort by reusing the same libraries and business logic. Also, it gives you the ability to build apps faster with smaller teams. It is easier for developers to switch to the frontend context since they stay with the same language.
4. GraphQL / HotChocolate
In recent years, REST has become the norm for web APIs. It uses the HTTP protocol to perform various operations, each marked with a so-called verb (POST, GET, PUT, DELETE, etc.) indicating the type of operation to perform. This approach, however, has some downsides. For example, the client typically cannot decide what range of data to fetch, which leads to transferring extra information that won’t be used. Also, when multiple dependent resources need to be fetched, it may mean sending multiple requests to the API.
GraphQL, as an alternative, is an open-source data query and manipulation language for APIs. By formulating a query, a developer retrieves just the necessary data and nothing else, which addresses the over-fetching problem. And when data has to be gathered from multiple sources, that becomes the backend’s responsibility: the client performs a single request and gets everything it needs at once. The backend can tell from the query exactly which data is requested and skip retrieving anything unnecessary, which optimizes performance.
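The selective-fetching idea can be illustrated with a toy resolver. This is plain Python, not HotChocolate; the user data and the dict-based “query” are invented for the example (real GraphQL parses a dedicated query language):

```python
# A toy resolver illustrating GraphQL's core idea: the client names the
# fields it wants, and the server returns exactly those, nothing more.
USER = {"id": 1, "name": "Ada", "email": "ada@example.com",
        "orders": [{"id": 10, "total": 99.0}, {"id": 11, "total": 25.0}]}

def resolve(data, selection):
    # selection maps field name -> True (leaf) or a nested selection
    result = {}
    for field, sub in selection.items():
        value = data[field]
        if sub is True:
            result[field] = value
        elif isinstance(value, list):
            result[field] = [resolve(item, sub) for item in value]
        else:
            result[field] = resolve(value, sub)
    return result

# Rough equivalent of the GraphQL query: { name orders { total } }
print(resolve(USER, {"name": True, "orders": {"total": True}}))
# {'name': 'Ada', 'orders': [{'total': 99.0}, {'total': 25.0}]}
```

Note that `email` is never transferred: the shape of the response mirrors the shape of the query, which is what removes over-fetching.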
What’s more, GraphQL also provides a schema, which is a kind of “documentation” of the API. The schema provides all necessary information to make a request, including fields and their types. By strong typing, you have explicit control over data format both when formulating a request and when retrieving a response.
HotChocolate is a .NET implementation of a GraphQL server, actively developed by the ChilliCream community. It is really easy to set up, intuitive to use, and does not require you to write schemas manually. What’s more, the team provides all the tools you may find useful when developing a GraphQL API, including an IDE to explore schemas, execute requests, and get performance insights.
Even though HotChocolate is only a few years old, it has become very popular, and we believe that in the future it will become the de facto standard. We keep our fingers crossed for that and collaborate with its large community.
5. gRPC
Remote Procedure Call (RPC) is a well-known technique for executing remote code through a local procedure call: calling what looks like a local method actually executes code on a different computer. Client-server communication happens behind the scenes to send the request and receive the response, but from the programmer’s point of view, invoking the method is no different from invoking a ‘local’ one.
This approach gives various possibilities and advantages in distributed systems. Think for instance about internal communication between microservices. In certain cases, an external request to your API will require you to gather some information from other microservices. And here you may go for a standard API call, but it will probably cost you some extra time to implement the logic for formulating and sending a request, probably mapping the response to an object, and handling possible errors. What if such code could be generated automatically for you so all you need to do is call a method?
Here comes the open-source gRPC system. By using an interface definition language you specify the data structures and methods with their parameters and return types to be exposed to clients. The actual code in your preferred programming language is generated automatically both server- and client-side. On the server-side, you need to provide implementations for the methods you previously declared. On the client-side, on the other hand, the autogenerated code will provide you with the same methods as the server exposes, so most of what you need to do then comes down to just calling one of them.
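What stub generation buys you can be sketched in a few lines. This is a hypothetical in-process imitation in Python (the `Greeter` service and all names are invented); real gRPC generates typed stubs from the interface definition and serializes calls over the network:

```python
# The client calls greeter.say_hello(...) like a local method, while a
# proxy packages the call and a dispatcher runs the real implementation
# "on the server".

class Server:
    def __init__(self):
        self._services = {}

    def register(self, name, impl):
        self._services[name] = impl

    def dispatch(self, service, method, args, kwargs):
        # Server side: look up the declared method and run its implementation.
        return getattr(self._services[service], method)(*args, **kwargs)

class Stub:
    # Client side: every attribute access yields a callable that forwards
    # the invocation to the server, the way generated gRPC stubs do.
    def __init__(self, server, service):
        self._server, self._service = server, service

    def __getattr__(self, method):
        def call(*args, **kwargs):
            return self._server.dispatch(self._service, method, args, kwargs)
        return call

class GreeterImpl:
    def say_hello(self, name):
        return f"Hello, {name}!"

server = Server()
server.register("Greeter", GreeterImpl())
greeter = Stub(server, "Greeter")
print(greeter.say_hello("World"))  # Hello, World!
```

All the request formulation, mapping, and transport that you would write by hand for a REST call lives inside the stub, generated once from the interface definition.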
So we’re saying goodbye to REST for microservice communication and saying hello to gRPC!
6. Akka.NET
Akka.NET is an open-source framework for distributed computing, developed by Petabridge. It lets you create scalable, concurrent, message-driven applications.
You might have already heard of the actor model programming paradigm. The basic unit of execution in this approach is an actor: when it receives a message, it either processes it directly or passes further messages to other actors to do the job. An interesting aspect of this approach is that it doesn’t matter whether an actor runs locally or on another node of the actor system. This lets you create resilient, efficient systems and scale them both vertically and horizontally.
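A minimal actor is just a mailbox drained by a single worker. This sketch uses Python’s threading and queue modules and is not Akka.NET (which adds supervision, remoting, clustering, and much more); all names are invented:

```python
import queue
import threading

class Actor:
    """One mailbox, one worker thread processing messages in arrival
    order, so the handler never runs concurrently with itself."""
    def __init__(self, handler):
        self._mailbox = queue.Queue()
        self._handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def tell(self, message):
        self._mailbox.put(message)      # fire-and-forget send

    def stop(self):
        self._mailbox.put(None)         # "poison pill" ends the worker
        self._thread.join()

    def _run(self):
        while (message := self._mailbox.get()) is not None:
            self._handler(message)

received = []
counter = Actor(received.append)
for i in range(3):
    counter.tell(i)
counter.stop()                          # waits until the mailbox is drained
print(received)  # [0, 1, 2]
```

Because senders only know the `tell` operation, the same code works whether the mailbox lives in-process or, in a real actor system, on another node.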
The actor model perfectly fits into the message-driven development approach. It’s a good choice for both monolithic and microservice-based architectures. It may as well be used for implementing the CQRS pattern. The world of actors nicely separates responsibilities to make application code and architecture clean and easy to maintain and scale.
7. OpenTelemetry
Observability has three pillars: logs, metrics, and tracing. It is a vital aspect of any application, as we usually need to monitor its behavior, performance, and health, and investigate the root causes of errors. The goal is to let everyone involved in product development understand problems better, fix them efficiently, and reduce their possible negative impact on the business.
The open-source OpenTelemetry project provides a single standard and a set of technologies to instrument applications and infrastructure and to generate, collect, and export metrics, traces, and logs (log support is still on the way). It is an abstraction layer over providers, so it does not tie you strictly to a specific one. Also worth mentioning: industry leaders in the observability area support it.
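The tracing pillar boils down to nested, timed spans. This is a toy tracer in plain Python, not the OpenTelemetry SDK (which adds context propagation, exporters, sampling, and the metrics/log APIs); the span names are invented:

```python
import time
from contextlib import contextmanager

class Tracer:
    # Each finished span records its name, its parent, and how long it
    # took, which is the raw material of a distributed trace.
    def __init__(self):
        self.finished = []
        self._stack = []

    @contextmanager
    def span(self, name):
        parent = self._stack[-1] if self._stack else None
        self._stack.append(name)
        start = time.perf_counter()
        try:
            yield
        finally:
            self._stack.pop()
            self.finished.append({
                "name": name,
                "parent": parent,
                "duration_s": time.perf_counter() - start,
            })

tracer = Tracer()
with tracer.span("handle_request"):
    with tracer.span("query_database"):
        pass  # real work would go here

print([(s["name"], s["parent"]) for s in tracer.finished])
# [('query_database', 'handle_request'), ('handle_request', None)]
```

An exporter would ship these records to a backend of your choice, which is exactly the seam where OpenTelemetry keeps you provider-independent.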
If possible, we also use dedicated cloud solutions like Application Insights.
8. MediatR
Our solutions are usually developed in the so-called modular monolith architecture. It’s typically an optimal starting point for building a web application that may still evolve towards microservices. Before getting to that point, though, we start by identifying domain-model boundaries to create bounded contexts isolated from each other.
On top of that sits an API that needs to consume them. Here you can inject a bunch of services into your controllers, or choose a cleaner, message-driven approach. The latter employs the mediator pattern to decouple the code: separate requests, each with its associated properties, are processed by dedicated request handlers implemented in the particular bounded contexts.
And here comes MediatR, a library by Jimmy Bogard — a simple implementation of the mentioned mediator pattern. You can imagine it as an in-memory message bus. It uses dependency injection under the hood to resolve handlers for individual messages posted on the bus (commands, queries, events). Using it instead of injecting multiple services to a controller makes the controller thinner and easier to maintain.
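The in-memory message bus idea is small enough to sketch. This is a minimal mediator in Python in the spirit of MediatR, not MediatR itself (which is a C# library resolving handlers through dependency injection); the query and handler names are invented:

```python
from dataclasses import dataclass

class Mediator:
    # An in-memory message bus: one handler registered per request type.
    def __init__(self):
        self._handlers = {}

    def register(self, request_type, handler):
        self._handlers[request_type] = handler

    def send(self, request):
        # Resolve the handler by the request's type and invoke it, the
        # way MediatR resolves handlers via the DI container.
        return self._handlers[type(request)](request)

@dataclass
class GetOrderQuery:      # a request handled inside one bounded context
    order_id: int

def handle_get_order(query: GetOrderQuery):
    return {"id": query.order_id, "status": "shipped"}

mediator = Mediator()
mediator.register(GetOrderQuery, handle_get_order)
print(mediator.send(GetOrderQuery(order_id=42)))
# {'id': 42, 'status': 'shipped'}
```

A controller now depends on the mediator alone: it posts a request object and never sees which module, or eventually which microservice, handles it.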
This solution is valid only for internal communication. However, when MediatR is hidden under an abstraction layer, it may easily be replaced with a message bus in the future when bounded contexts of the modular monolith become independent microservices.
9. ML.NET
Artificial intelligence is constantly gaining popularity across various sectors. By definition, AI lets computers make decisions that usually require human expertise. Machine Learning (ML), a subset of artificial intelligence, allows computer programs to improve automatically by examining data and finding common patterns in it. For example, think of music streaming services that, based on the songs you liked, suggest other songs you may enjoy.
ML.NET is an open-source, cross-platform machine learning framework for .NET. It lets you add machine learning to your software to make automatic predictions based on the data your application processes: price prediction, product recommendation, sales forecasting, image classification, and many more. It lets you train a machine learning model or reuse an existing one provided by a third party. This means you don’t need a background in data science to make use of the library.
10. Reactive Extensions (Rx)
To a large extent, modern applications are driven by changes happening inside the application or in external systems. These changes might occur in an unpredictable order, or even concurrently. Typically, we handle these changes (events) independently, even though they may refer to the same data. When the pieces of code responsible for handling individual events are isolated from one another, coordinating the events may become difficult, and the code can end up hard to maintain and prone to errors.
Reactive programming addresses that by providing an abstraction layer for events and states that change over time. You react to events by forming chains of execution or event streams that you access as though they were collections. It’s essentially what the Reactive Extensions (Rx) library lets you achieve with ease while giving you the convenience of using LINQ. Rx may be a natural choice for dealing with sequences of events.
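The collection-like chaining over event streams can be sketched with a toy push-based stream. This is plain Python, not Rx (which adds schedulers, time-based operators, error and completion signals, and far more combinators); the `clicks` stream is invented:

```python
class Stream:
    # A push-based event stream: operators build new streams that react
    # whenever the source emits, so you compose them like collections.
    def __init__(self):
        self._subscribers = []

    def subscribe(self, on_next):
        self._subscribers.append(on_next)

    def emit(self, value):
        for on_next in self._subscribers:
            on_next(value)

    def map(self, fn):
        out = Stream()
        self.subscribe(lambda v: out.emit(fn(v)))
        return out

    def filter(self, pred):
        out = Stream()
        self.subscribe(lambda v: pred(v) and out.emit(v))
        return out

clicks = Stream()
seen = []
# Declare the pipeline once: keep even payloads, double them.
clicks.filter(lambda v: v % 2 == 0).map(lambda v: v * 2).subscribe(seen.append)
for v in range(5):
    clicks.emit(v)
print(seen)  # [0, 4, 8]
```

The pipeline reads like a LINQ query over a list, yet it runs incrementally as events arrive, which is the essence of Rx.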
Apart from all those, we have two other pearls in our arsenal: the modular monolith and microservices architectures, but they will be covered in a separate article. Stay tuned!