
Backend Services for Notic Meet

Last week I wrote about the database that Notic Meet will use: NoSQL document storage with MongoDB.

With V1 of the DB design created, it’s time to work on the initial backend services for Notic Meet. Rather than build all of them before building the front end, we will be building in slices, with the front and back ends being built together for each feature.

Before working on those vertical slices, we are getting the server project to a point where new models and endpoints can be added easily.

The app uses .NET and C#. The project was set up with the Blazor WebAssembly App template, with the ASP.NET Core Hosted option selected.

This template provided a good enough start and created three projects within the solution:

  • Client – used for the frontend
  • Server – home to the API controllers as well as the services those controllers access
  • Shared – used, in our case, to store some frontend logic as well as the shared models that connect the frontend to the backend, so that we don’t need to deal with HTTP requests in the client project (a hypothetical DTO is sketched below)
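
As a hypothetical example of what could live in Shared, a DTO used on both sides of the wire might look like this; the namespace and field names are illustrative, not the final design:

// Shared project: a hypothetical DTO passed between client and server.
namespace NoticMeet.Shared
{
    public class UserDto
    {
        public string UserId { get; set; }
        public string DisplayName { get; set; }
        public string Email { get; set; }
    }
}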

On the front end, I switched to using MudBlazor, a component library I have used for the past year since moving to use C#.

To demonstrate briefly how communication flows from front end to back end, it begins with setting up dependency injection in Program.cs in the client project.

builder.Services.AddTransient<IClientUserService, ClientUserService>();
builder.Services.AddTransient<IClientMeetingService, ClientMeetingService>();

This registers the ClientUserService and ClientMeetingService. The former will be the place for all interactions and logic related to users, and the latter will be for meetings. Several other client services will be added, but these two are sufficient to get the backend to a usable state. AddTransient means that a new instance of the implementing class is created each time the service is requested.
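
For reference, the registered interfaces might look something like this; the UpdateUser signature matches the one shown later, while GetMeeting is a purely illustrative placeholder:

public interface IClientUserService
{
    Task<LoginResult> UpdateUser(UserDto data);
}

public interface IClientMeetingService
{
    Task<MeetingDto> GetMeeting(string meetingId); // illustrative only
}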

To use the client services, they are injected into the .razor pages as follows:

@inject IClientUserService ClientUserService

In the code section of each Razor page, we can now call service methods such as:

var user = await ClientUserService.UpdateUser(updateUserDto);

The implementation in ClientUserService follows this pattern to reach the API endpoint on the server:

public Task<LoginResult> UpdateUser(UserDto data) => Post<LoginResult>("user/updateuser", data);

This makes a POST request to the server, calling the user/updateuser endpoint and sending the UserDto as the request body.
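
For context, here is a minimal sketch of how that Post<T> helper could sit on top of an injected HttpClient (using the System.Net.Http.Json extensions); this is an assumption about the helper’s shape, not the actual implementation:

public class ClientUserService : IClientUserService
{
    private readonly HttpClient _http;

    public ClientUserService(HttpClient http) => _http = http;

    public Task<LoginResult> UpdateUser(UserDto data) => Post<LoginResult>("user/updateuser", data);

    // Serialise the payload as JSON, post it, and deserialise the response body.
    private async Task<T> Post<T>(string endpoint, object data)
    {
        var response = await _http.PostAsJsonAsync(endpoint, data);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<T>();
    }
}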

In the UserController file in the server project, we set up an HttpPost endpoint:

[HttpPost("updateuser")]
public async Task<IActionResult> UpdateUser(UpdateUserDto data)
{
    if (string.IsNullOrEmpty(data.UserId))
    {
        return BadRequest();
    }

    var result = await _userService.UpdateUser(data);
    return Ok(result);
}

In the controller above, we have defined an HttpPost endpoint called updateuser that accepts the UpdateUserDto model. We check that it contains a UserId; if it does, we call the user service’s UpdateUser method, otherwise we return a 400 Bad Request.

So far, we have called the ClientUserService, which calls an endpoint defined in a controller on the server, which then calls the UserService where all of the logic to update the user will be held.
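
A hedged sketch of that server-side UserService might look like the following; the data-context call mirrors the one shown below, while the interface name and LoginResult shape are assumptions, since the MongoDB layer is still a prototype:

public class UserService : IUserService
{
    private readonly IDataContext _dataContext; // assumed interface name

    public UserService(IDataContext dataContext) => _dataContext = dataContext;

    public async Task<LoginResult> UpdateUser(UpdateUserDto data)
    {
        // Persist the changes through the data context (described below).
        await _dataContext.Users.Update(data);
        return new LoginResult { Success = true }; // illustrative result shape
    }
}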

I won’t go further into how the data is fetched from MongoDB at this point, as we only have a prototype of that, but a brief explanation follows:

The UserService accesses the data context, which will look something like this:

dataContext.Users.Update(data);

If you wanted to update a meeting instead, that would typically go through ClientMeetingService > MeetingController > MeetingService, and then to dataContext.Meetings.Update(data);

By working with the data context, we use abstraction to separate out the logic of how data is fetched. Currently, the data context connects to a MongoDataRepository class that talks directly to our MongoDB instance. However, because MongoDataRepository implements an interface, we could switch to a CosmosDataRepository if we ever decide to use a different provider. We would need to migrate any data once the web app has gone live, but the option is there. Abstracting the logic behind the data context means that the UserService, MeetingService, and any other service never needs to know where the data comes from or how it is fetched.
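
One way that abstraction could be expressed is sketched below; the interface name and members are assumptions based on the description above, not the actual code:

public interface IDataRepository
{
    Task<T> GetById<T>(string collection, string id);
    Task Update<T>(string collection, T document);
}

With that in place, swapping providers becomes a one-line change in dependency injection:

builder.Services.AddSingleton<IDataRepository, MongoDataRepository>();
// builder.Services.AddSingleton<IDataRepository, CosmosDataRepository>();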

We are about 70% done with the data context, about 70% done with the MongoDataRepository, and zero per cent done with Redis caching. For that last part, the MongoDataRepository will likely implement an ICaching interface, with logic to decide what needs to be cached, for how long, and when it needs to be flushed or refetched.
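
Nothing is built for caching yet, but an ICaching interface along these lines is one possible shape; the members are purely illustrative:

public interface ICaching
{
    Task<T> Get<T>(string key);
    Task Set<T>(string key, T value, TimeSpan expiry);
    Task Flush(string key);
}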

Let us know in the comments if you have any questions.


Designing the Datastore for Notic

One of the early decisions in making Notic Meet was choosing how data would be stored. Traditionally I have opted for MySQL, having used it for many years, but in the last couple of years I have worked with NoSQL databases, particularly document-based storage.

I have grown to really like this approach and because of this, I decided to use it for Notic Meet.

The database of choice is MongoDB. I don’t have experience using it, but I do have experience with Cosmos DB, which has some similarities.

After deciding the basic feature set for the MVP, and after choosing MongoDB, I met with my team member at Notic and we designed what we could of the storage. In short, some of the items we need to store include meetings, users, notes, and threads, plus a few more things. I’ll share a more technical post showing the design at a later date.

Our design considers a number of factors, including user stories and how a user will interact with the data: what does a user need, what is the best way to get that data, and so on.

As mentioned, we will be using document storage for this project, and we will duplicate some data across documents to speed up access. This will be metadata rather than the full data, meaning we could potentially list all of a user’s meetings just by looking at the user model; then, as a meeting is opened, its full contents can be fetched if needed.
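
As a purely hypothetical illustration of that duplication, the user document could embed a lightweight summary of each meeting, enough to render a meeting list without loading the full meeting documents; the class and property names here are illustrative:

public class UserDocument
{
    public string Id { get; set; }
    public List<MeetingSummary> Meetings { get; set; } // duplicated metadata
}

public class MeetingSummary
{
    public string MeetingId { get; set; }
    public string Title { get; set; }
    public DateTime StartsAt { get; set; }
}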

Partitioning will also be used for some types of data to help speed up retrieval.

We both feel that the design is good to go, although we may make minor alterations as needed throughout building the MVP.

We will post regular updates here. Please feel free to comment, ask questions, make suggestions, and sign up for our emailing list if you prefer to receive updates that way.