EVNotify has a quite useful and powerful feature that users rely on daily: logs. Logs summarize all of your charging sessions, and optionally even the drives you monitor with EVNotify, for later inspection.
All the data you collect during a session, such as the battery temperature, the current charging speed, the state of charge, and much more, is then visible and rendered in a chart. If you have location synchronization enabled, you can even see a map of where you drove.
To provide this functionality, the server needs to process your data after your session has ended.
This is done in a quite complex way, because the data is highly dynamic and is not pushed at a steady interval. The interval varies for several reasons: a bad client internet connection, the local reading and analysis of binary data from the car, or even the server itself when it is overloaded. That is why the data is summarized afterwards on the backend.
Usually it takes about 5-10 minutes after your session has ended until the log is visible within the app or the brand-new web interface.
Now comes the interesting part: why does it take that long? On the one hand, a delay of 2-3 minutes is normal and intentional. As mentioned earlier, you have to keep in mind that the connection can be lost for a few seconds or even a minute. With a grace period of 2-3 minutes of "no more data coming in", you can be sure that the session has really ended.

On the other hand, all the submitted data needs to be processed, and that is a really CPU-intensive task, because the server has to retrieve everything that has been submitted since the latest log stored in the database. This can be a lot of data: up to several thousand rows every few minutes whenever a new log is generated. Every sync is stored, so every value, such as the current state of charge, is transmitted, and afterwards you can see a chart of how it evolved. But the biggest share of the traffic is location data. When you drive a trip with EVNotify and location synchronization turned on, you submit location data every second, or even multiple times per second, depending on the precision and the current speed. Imagine how much data is sent to the server when hundreds of users use EVNotify on a daily basis.
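The grace-period idea can be sketched in a few lines. This is an illustrative sketch only, not EVNotify's actual code; the names `GRACE_PERIOD` and `session_has_ended` are my own:

```python
from datetime import datetime, timedelta

# Assumption: a 3-minute grace period, the upper end of the 2-3 minutes
# mentioned above. EVNotify's real value may differ.
GRACE_PERIOD = timedelta(minutes=3)

def session_has_ended(last_sync: datetime, now: datetime) -> bool:
    """A session counts as ended once no data has arrived for the grace period."""
    return now - last_sync > GRACE_PERIOD

now = datetime(2019, 6, 1, 12, 0, 0)
print(session_has_ended(now - timedelta(minutes=4), now))   # True: quiet for 4 min
print(session_has_ended(now - timedelta(seconds=90), now))  # False: maybe just a connection drop
```

A short connection drop of 90 seconds does not end the session, while 4 minutes of silence does.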
All of this data is then logically combined, merged, and summarized into a log entry. A log entry simply records which user drove or charged at which time.
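Conceptually, the summarization collapses all raw sync rows of one session into a single entry. The field names below (`user`, `timestamp`, `soc`, `charging`) are illustrative assumptions, not EVNotify's actual schema:

```python
def summarize(rows):
    """Collapse a session's raw sync rows into one log entry:
    who, start/end time, and whether it was a charge or a drive."""
    rows = sorted(rows, key=lambda r: r["timestamp"])
    return {
        "user": rows[0]["user"],
        "start": rows[0]["timestamp"],
        "end": rows[-1]["timestamp"],
        # a session counts as a charge if any row reported charging
        "charge": any(r.get("charging") for r in rows),
        "start_soc": rows[0]["soc"],
        "end_soc": rows[-1]["soc"],
    }

rows = [
    {"user": "akey1", "timestamp": 100, "soc": 40, "charging": True},
    {"user": "akey1", "timestamp": 160, "soc": 42, "charging": True},
]
entry = summarize(rows)
print(entry["charge"], entry["end_soc"])  # True 42
```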
You cannot really avoid the heavy calculation required to logically summarize a drive or charge, nor the huge amount of data that needs to be processed. It is a very cool feature, and it will be extended in the future to also track consumption and the amount of energy you have charged (a new post will follow soon).
But here is the problem: since EVNotify v2, which introduced the log feature, all of this data has been stored in one place. Over time, the amount of submitted data increased exponentially. By now, more than 50 million data points have been submitted in total, which is amazing and almost unbelievable!
And that is exactly the problem. It's a lot of data. Like, really, a lot. You can imagine the database power required to query all of this data just to select the rows that need to be processed next.
This is not the only problem. If log generation takes 5 or 6 minutes, that is not a big deal for the user. But if you open a log and have to wait around 20 seconds instead of 1 or 2, that is really frustrating and a poor user experience. Especially for drives, the loading times have not been good over the last few days and weeks.
So I really needed to think about how to improve this. A few weeks ago I upgraded the server hardware to provide more power. That helped a little, of course, but it does not solve the problem; it only postpones it.
I came up with a new idea that helps quite a bit: migrating the logs. Instead of storing all the "statistics" / history data (the data you submit) in a single huge table, it is split into dedicated monthly tables. With that, the amount of data to load and process per query is divided by twelve. And since having less data per table optimizes database operations a lot, the real performance boost is even bigger.
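Routing a data point to its monthly table is simple once you derive the table name from the timestamp. The `statistics_YYYY_MM` naming scheme below is an assumption for illustration, not necessarily the scheme EVNotify uses:

```python
from datetime import datetime

def monthly_table(ts: datetime, base: str = "statistics") -> str:
    """Derive the dedicated monthly table name for a data point's timestamp."""
    return f"{base}_{ts.year}_{ts.month:02d}"

print(monthly_table(datetime(2019, 6, 15)))  # statistics_2019_06
print(monthly_table(datetime(2018, 12, 1)))  # statistics_2018_12
```

Because the name is purely a function of the timestamp, both the write path and the read path can compute it on the fly, without any lookup table.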
Over the weekend I thought a lot about how to achieve this, especially during live execution with no downtime. Users should still be able to use EVNotify; they should not even notice that their logs are being processed and migrated in the background.
This is a real challenge, especially for the EVNotify server, which is not the biggest one. EVNotify is an open-source project that I build and develop for free, so I don't have a fixed or reliable income from it. I rely on donations and my own pocket ;-)
So, the current solution is to iterate over the existing logs and move the associated data from the "big table" into the corresponding "monthly table". In the future this may be changed to weekly tables, but that can be done later. The migration takes a lot of time and is really, really computationally intensive.
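The migration idea can be sketched as a loop over logs: copy each log's rows from the big table into the right monthly table, and only then remove them from the big table, so a crash mid-migration never loses data. This is a minimal in-memory sketch with made-up field names, not the real database code from the commit:

```python
from collections import defaultdict
from datetime import datetime

def migrate_log(big_table, monthly_tables, log):
    """Move all rows belonging to one log (one user's start/end window)
    from the big table into their monthly tables."""
    moved, kept = [], []
    for row in big_table:
        in_window = (row["user"] == log["user"]
                     and log["start"] <= row["timestamp"] <= log["end"])
        (moved if in_window else kept).append(row)
    for row in moved:
        # assumed naming scheme for illustration: statistics_YYYY_MM
        ts = row["timestamp"]
        monthly_tables[f"statistics_{ts.year}_{ts.month:02d}"].append(row)
    big_table[:] = kept  # rows leave the big table only after being copied
    return len(moved)

big = [{"user": "a", "timestamp": datetime(2019, 5, 1, 12, 0)},
       {"user": "a", "timestamp": datetime(2019, 5, 1, 12, 1)},
       {"user": "b", "timestamp": datetime(2019, 5, 1, 12, 0)}]
monthly = defaultdict(list)
n = migrate_log(big, monthly, {"user": "a",
                               "start": datetime(2019, 5, 1, 11, 0),
                               "end": datetime(2019, 5, 1, 13, 0)})
print(n, len(big))  # 2 1
```

Working log by log keeps each transaction small, which is what allows the migration to run in the background while users keep syncing.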
If you are interested in the code, you can take a look at the commit.
Currently, about 33.51% of the logs have already been migrated, and several thousand are still pending. The migration started about two days ago, so it will take at least another two or three days until every log is migrated.
Finally, to give you an example of the performance boost for a driving log of 23 minutes: before the migration, it took about 5.6 seconds until the data was retrieved. After the migration, that time dropped to less than 200 ms.
I hope you liked this very first blog post, especially the insight into the EVNotify migration. I will continue writing more posts to keep you updated on what I'm doing and what I'm currently working on.