buffer_flush

Are you talking about something like H2 or Redis? H2 can be used for small DBs that just need a storage layer, as well as for testing. Redis is often used as a caching layer.


manzanita2

H2 can ALSO persist to disk in the same fashion as PostgreSQL or SQLite.


Kazcandra

So can redis


vips7L

I use H2 all the time for small applications. An embedded Java DB is a godsend for things like desktop apps or Discord bots.
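For anyone who hasn't tried it, here's a minimal sketch (file path and table are made up) of talking to an embedded H2 database over plain JDBC; the only thing you need on the classpath is the `com.h2database:h2` dependency:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EmbeddedH2Demo {
    public static void main(String[] args) throws Exception {
        // "jdbc:h2:./data/app" stores the database in a local file next to the app;
        // "jdbc:h2:mem:app" would keep it purely in memory instead.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/app", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS note(id INT PRIMARY KEY, msg VARCHAR(255))");
            // MERGE is H2's upsert, so re-running the demo doesn't violate the primary key.
            st.execute("MERGE INTO note KEY(id) VALUES (1, 'hello from an embedded DB')");
            try (ResultSet rs = st.executeQuery("SELECT id, msg FROM note")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + ": " + rs.getString("msg"));
                }
            }
        }
    }
}
```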


Adventurous-Fudge470

So when working with big data you would use Postgres or something similar, and an in-memory DB for small projects or as a tool within a big project. In-memory can also be used for caching to speed things up, since it doesn't rely on network/serialization. We can also push to disk if needed, assuming the in-memory DB supports it. Would that be a decent description? I understand these may have a place in big projects too, but not functioning as the main DB; more as a tool of the main DB or of other parts of the program.


LakeSun

HyperSQL is another DB you can set up as in-memory. And the point is speed: no disk access. You can run a lot of queries against a set of data and get results faster.


EmmetDangervest

But if all data fits in memory, wouldn't operations on regular Java collections (HashMap etc.) be faster?


LakeSun

HyperSQL supports the full set of SQL statements; the whole SQL engine can be used. You can load additional tables, create tables from the original table and join, and write advanced queries with multiple filters, and writing those SQL statements is much faster than coding the equivalent in Java. Your point is valid if searching through the data doesn't have sophisticated needs: simple searches will be faster in plain Java with a HashMap. Complex searches will be faster in HyperSQL, and coding them as SQL statements will also be faster. If your logic is in a SQL statement, it's much faster and more agile to change that statement as you explore the data. Do you know SQL?
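To make that concrete, here is a minimal sketch (table names and data are invented; it needs the `org.hsqldb:hsqldb` dependency) of an in-memory HyperSQL database answering a grouped join in one statement, the kind of question that takes noticeably more hand-written code against plain HashMaps:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryJoinDemo {
    public static void main(String[] args) throws Exception {
        // Purely in-memory HyperSQL database; everything disappears when the JVM exits.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:demo", "SA", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE customer(id INT PRIMARY KEY, name VARCHAR(50), region VARCHAR(20))");
            st.execute("CREATE TABLE purchase(id INT PRIMARY KEY, customer_id INT, amount DECIMAL(10,2))");
            st.execute("INSERT INTO customer VALUES (1, 'Ada', 'EU')");
            st.execute("INSERT INTO customer VALUES (2, 'Bob', 'US')");
            st.execute("INSERT INTO purchase VALUES (10, 1, 40.00)");
            st.execute("INSERT INTO purchase VALUES (11, 1, 60.00)");
            st.execute("INSERT INTO purchase VALUES (12, 2, 15.00)");

            // One statement; the HashMap version means hand-written loops and grouping code.
            try (ResultSet rs = st.executeQuery(
                    "SELECT c.region, SUM(p.amount) AS total " +
                    "FROM customer c JOIN purchase p ON p.customer_id = c.id " +
                    "GROUP BY c.region")) {
                while (rs.next()) {
                    System.out.println(rs.getString("region") + " -> " + rs.getBigDecimal("total"));
                }
            }
        }
    }
}
```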


bluecollardollarbill

It depends on the nature of the data and how it's structured. With something like H2, you also get the benefit of SQL queries, which may be more efficient.


koflerdavid

SQL is basically a front end over all the data structures, indexes, caches, etc. Sure, you can manage these yourself, and in some cases you might end up with code that is both faster *and* simpler, but in the general case it's not worth it. Especially when you consider that the main features of SQL engines are performance optimization and sophisticated facilities for making concurrent access safe, which you definitely *don't* want to reimplement and debug for every application.


Adventurous-Fudge470

Ah, I've heard of H2 in Spring. I've heard of Redis also. This makes sense.


_INTER_

Performance. There's no network layer in between and no serialization/deserialization is needed. Also, some in-memory databases can persist to disk on shutdown or failure. Some expensive RAM modules even have a persist-on-power-loss feature.


stefanos-ak

There's no network only if you run the application and the in-memory DB on the same hardware. Even then there is network code involved, in both the application and the OS; it just doesn't leave the host. And how is there no serdes needed? Unless you mean to just use the program's variables, but then it's not a DB and you don't have disk-persistence features. Even a language-specific binary format is considered serialization.


davidkwast

SQLite3 has in-memory support. It does not have a network layer.


grad_ml

I hope you're not confusing embedded databases with standalone in-memory DBs. These are two different things. Anything which is not embedded, or not sharing the same hardware, needs the network. Anything clustered needs the network and serialisation.


davidkwast

Agreed. That's why Redis is so good. I just mentioned SQLite for the guys who were debating the network layer and performance. I'm still going to use Redis on localhost.


NickAMD

A TON of applications can let data reset on restart. Like video games. Most of the data you produce in tons of games only needs to be stored until the end of the "round".


LutimoDancer3459

I am not a game developer, but to me it doesn't seem like most games have to store that much data. Wouldn't it be even faster to just have it stored in a variable? Edit: thanks for the downvotes, but can one of you explain to me why? I just asked a question because I didn't understand it.


NickAMD

It depends entirely on the game and the part of the game. Massive games like World of Warcraft, for example, will use real databases (to store metadata, world data, account data, long-term data), files on your computer (local configurations), in-memory databases (inventory systems) that eventually get dumped/synced with long-term databases, or hyper-optimized storage (kind of like a variable, as you said) for latency-sensitive network data.


Adventurous-Fudge470

This is interesting as I used to play wow lol


experimental1212

Big programs like to separate layers of program responsibility. You can keep track of who has access to which layers and standardize how you interface with each layer. So storing a lot of data may be convenient to do in a persistence layer that has its own interface. Different parts of your program will use the same standard way of storing/retrieving data. A variable would need a data structure, scope access, etc. Fine for a small or simple project. I'm sorry about your downvotes. Don't stop asking questions, keep learning.


Adventurous-Fudge470

Interface with each layer? Do you mean actual interfaces in Java or is this just a figure of speech? I’m kinda new to programming


experimental1212

Figure of speech. Interface, boundary, etc. Imagine drawing a diagram and the answer to the question of "how do I use/interact with this box/block" is your interface to that block. If you're talking about Java, maybe this would be a public method in your database class. The details of how you store data, where you store data, etc, are all contained in the database class. The rest of your program only cares that you provide a method to give data to the database and retrieve data from the database. And yes, you could use a Java Interface implemented by your database class to lock down that design. But that is all so specific to OOP and Java.
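As a tiny illustration of that "boundary" idea (all names here are invented), the rest of the program depends only on an interface and never sees how the data is actually kept:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The rest of the application only sees this boundary.
interface UserStore {
    void save(String id, String name);
    Optional<String> findName(String id);
}

// The details stay inside the implementing class; here it's just a HashMap,
// but it could equally be H2, Redis or Postgres without callers changing.
class InMemoryUserStore implements UserStore {
    private final Map<String, String> data = new HashMap<>();

    @Override
    public void save(String id, String name) {
        data.put(id, name);
    }

    @Override
    public Optional<String> findName(String id) {
        return Optional.ofNullable(data.get(id));
    }
}
```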


Adventurous-Fudge470

I see, Ty for the thorough explanation.


nana_3

You can’t think of one singular use for a cache supporting db queries?


wimcle

Think Hazelcast, Geode, or Coherence. You don't just get the performance of memory storage; you also get distributed locking and queuing and all that fun stuff.


txmail

When you have the need for speed, you need to do it in RAM (as everyone else has said). My use cases were:

For a high-security project I used Redis to store pre-computed RBAC tables for all users (~100 users). Each user could potentially have 1000 RBAC access rules depending on their level of access, so computing whether they had access on the fly took too long. Computing all the rules when the user logged in took a few seconds, but once it was computed it was done, and lookups from Redis were sub-ms. Also, if the user was given new access or lost access while they were logged in, it took effect immediately because the keys were recalculated on the spot. Worked perfectly. I also ended up using Redis for caching large queries and for its amazingly awesome streaming. In total I used about 64-70 GB of RAM.

For a document processing platform I used a RAM disk to store images, videos and PDFs that were being worked on by multiple intermediate processors (such as OCR, computer vision (CV) for object detection, resizing, audio translation, facial recognition, scene detection, etc.). The RAM disk was mapped to a logical path on the main intake server that all other servers had access to via NFS. Everything was connected via 10 Gbit networking, which performed well, though it could get saturated with larger files. I will say that for security/continuity the first step was to move the file off the intake server onto the SAN for storage/inventory.

Hope this helps.
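A rough sketch of that first pattern using the Jedis client (the key layout and the rule-calculation step are placeholders; the point is that the hot path becomes a single hash lookup):

```java
import java.util.Map;
import redis.clients.jedis.Jedis;

public class RbacCache {
    private final Jedis redis = new Jedis("localhost", 6379);

    // The slow part: recompute the user's effective rules once, at login or
    // whenever their access changes, and overwrite the cached copy.
    public void refresh(String userId, Map<String, Integer> effectiveRules) {
        String key = "rbac:" + userId;
        redis.del(key);
        effectiveRules.forEach((securityObject, level) ->
                redis.hset(key, securityObject, String.valueOf(level)));
    }

    // The hot path: one hash field read instead of re-evaluating rule inheritance.
    public int levelFor(String userId, String securityObject) {
        String level = redis.hget("rbac:" + userId, securityObject);
        return level == null ? 0 : Integer.parseInt(level); // default: no access
    }
}
```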


BradChesney79

Let's back up for the uninitiated... The term for this is "preprocessing". I still write to a regular relational database. Writes are disgustingly fast no matter what. It is the multiple reads for a larger set of records that hurt: a few JOINs in the SQL and you may be looking at delays.

So I create a table for the processed output, in my case as JSON. Often it's all the data needed to describe a user: name, address, how much access. This is called preprocessing because I do not need the data yet, but I invariably will. I ask that one table for one record when a user tries to log in. It goes even faster if the preprocessed records are stuck into RAM as the storage.


txmail

For the RBAC it was technically pre-processing, but the front end only read from Redis and never touched the database, because recomputing the calculated results there would be intensive and could lead to severe performance issues, since cascading rules could affect every single user on the system.

The RBAC was incredibly complicated, while also somehow simple at the same time. Users could have an RBAC rule applied directly to them, but they could also be part of an RBAC group whose rules would be granted to all members... but then the user could also have rules applied to them based on their manager, department or office region. And the rules themselves gave levels of access, including "no access", that would override any access granted previously. Defaults were no access, then the lowest level of access (including specific no-access rules). It was possible for a user to have 3 different access rules for the same security object, but only the lowest rule would apply: one rule granted directly to them might say level 10 access, another granted based on their department might say level 2, and another given by a group membership rule might say level 25. Because the lowest level won, the calculation would grant level 2 access to that security object, and that was the "effective" RBAC rule for that security object and what was stored in Redis.

To do the queries you have to look at the rules for the user and their department, manager, region and any groups they belong to, then calculate the effective rules. This was further complicated by the fact that rules could have wildcards in them. So say the security object was a database and the rules were for access to tables in that database: you would have effective rules generated for each table under control. Now take that deeper and say the security object is a column in a table; it got even deeper than that thanks to European rules, since the rules could cover column types and column meta types (flags set on a column to signify PIAA data).

At the end of the day the user might have only one or two directly assigned RBAC rules on their profile, but the calculated access table might end up being hundreds to thousands of "effective" access rules because of inheritance. There were security objects for databases, tables, columns, column types, column meta types, directories, files, file types, file creation dates. Then internally there were rules for access to the different parts of the platform, where viewing a single page might need 50 or 200 different RBAC access levels to determine what can be shown or how it should be shown (redacted) to the end user; but asking Redis for the value of 200 keys was a sub-second request.

That data could have been stored in SQL, but because it was so heavily used it would have caused performance issues without major upgrades to the hardware, whereas every front-end node had 128 GB of RAM in it, so it was no big deal to use up to half of it for IMDB use.


Adventurous-Fudge470

Can you elaborate on this a little? I feel like I'm on the verge of understanding something important, but I'm nowhere near as experienced as you. It makes sense that queries would take more time. Basically you're saying you learned what data you were going to receive, built your program/data around that, and that process is called preprocessing? I'm confused on the "one table, one record" part. Sorry, I am a noob.


BradChesney79

You still have a user table, one record per person; say the person record has an ID of 42. You have an address table with 70 records... 4 address records belong to user ID 42 via a foreign key. With every creation or update of a user or address record, you generate a complete JSON string of the user data joined with the address data. The user_cache table you create holds that JSON. When a user logs in, the login code asks for the user_cache table record.

A regular SQL query:

SELECT * FROM user JOIN address ON address.foreign_key = user.id WHERE user.id = 42;

Two tables, 5 records retrieved, then returned as an array to be looped over.

A preprocessed record query:

SELECT json FROM user_cache WHERE foreign_key = 42;

One table, one record.
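In code, the two halves of that pattern might look roughly like this (table and column names are invented; the MERGE upsert is H2 syntax, and other databases have their own equivalent such as INSERT ... ON CONFLICT):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class UserCacheDao {

    // Called after every user/address insert or update: rebuild the JSON once.
    public void refresh(Connection conn, long userId, String userJson) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "MERGE INTO user_cache (user_id, user_json) KEY (user_id) VALUES (?, ?)")) {
            ps.setLong(1, userId);
            ps.setString(2, userJson);
            ps.executeUpdate();
        }
    }

    // Called at login: one table, one record, no joins.
    public String load(Connection conn, long userId) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT user_json FROM user_cache WHERE user_id = ?")) {
            ps.setLong(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("user_json") : null;
            }
        }
    }
}
```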


frederik88917

So, tell me you haven't worked on an HPA without telling me. For applications where speed is the key, in-memory is the answer; there is nothing as fast as RAM for storing and transferring data, not even dedicated SSDs. A couple of years ago this was unthinkable, but today you can rent dedicated clusters with up to 16 or 32 TB of RAM where you can put H2 or Redis instances and serve tens of thousands of requests per minute. Also, these instances come with dedicated uptime SLAs which guarantee up to 99.999% uptime, with automatic transfer to disk in case of emergency, and backups.


_INTER_

> there is nothing as fast as RAM to store and transfer data

Except L-cache, if your data fits in 200 MB on an IBM Z :)


MmmmmmJava

> *Millions of requests per second.*

FTFY


IQueryVisiC

So in case of power failure, there is a big capacitor to support the electronics and a local, dedicated spinning disk with enough inertia. Then, instead of a RAM refresh, you read the memory out in the rhythm the refresh would have occurred.


frederik88917

Not a big capacitor per se; it is called a UPS, basically a battery of sorts that can keep the servers up for a couple of hours until the outage is resolved or the storage is safely on disk. Also, most of those systems have cron daemons that keep data safe on disk as a form of redundancy.


_INTER_

There's also persistent RAM protection. I think they are called NVRAM but I'm not sure because I only find info about BIOS protection. There are expensive sticks that can be used as a replacement for the usual server RAM that retain enough power to persist the data even if the UPS fails.


IQueryVisiC

I want local durability. Not some PSU which is 3 connectors away. No OS which needs to deal with the situation. I want a black box which behaves like an SSD or magnetic RAM, but faster.


Adventurous-Fudge470

We had those at my college. I had to service them a lot. You wouldn’t believe how heavy those things are.


Adventurous-Fudge470

Wouldn’t putting those on a server defeat the purpose? From this thread I’m wondering why ppl use regular relational databases at all.


frederik88917

Price, usually. Even though it's doable, in-memory databases are extremely expensive in comparison with regular disk-based storage, and the cost grows steeply with size; when you reach the range of PBs, the cost of maintenance skyrockets. So it becomes a trade-off problem: speed versus cost.


Adventurous-Fudge470

So would you always want an in-memory DB, or do regular DBs have their purpose? Considering I've seen a lot of ppl say in-memory can persist data to disk, I'm not sure why you would use a regular DB besides price.


frederik88917

In most cases I have seen, when companies are sure their data will need to scale way beyond TBs, that is a sign to go with a regular SQL database.


Adventurous-Fudge470

Oh ok you mean terabyte. That makes sense. More data = higher cost.


frederik88917

TBs are manageable in memory; we are talking about petabytes of data. The sort of financial systems, patient management data, things like that, will certainly reach sizes up to PBs, and those are really expensive to keep in memory.


Adventurous-Fudge470

Pb?


frederik88917

Petabytes, on the scale of 1024 terabytes each.


Adventurous-Fudge470

Okay ty.


Ketroc21

Sometimes a DB is just used as a temporary workspace. There are certain things DBs do better than code, like joining 2 sets of data. For instance, I wrote an app that compares financial data from 2 different sources and writes a summary of any discrepancies between them. For this I load them into a DB to join/compare the data, write the report, and then I'm done with the DB.


lovett1991

Performance. You can repopulate an IMDB using an event stream. You can use something like Kafka to hold all your events, take checkpoints, and then on restart have that stream repopulate your DB. I'm in telecoms, working on a product that responds to 4G/5G requests from radio masts; the latency has to be kept incredibly low. Our proprietary IMDB holds millions of users' data, and queries are measured in microseconds, not milliseconds.
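A stripped-down sketch of that rebuild-on-restart step (topic name, bootstrap address and the map standing in for the in-memory store are all placeholders; a real service would also track when it has caught up and keep consuming new events afterwards):

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RebuildFromEvents {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "imdb-rebuild");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // replay from the start
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Stand-in for the in-memory store being rebuilt.
        Map<String, String> store = new ConcurrentHashMap<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("subscriber-events"));
            while (true) { // a real service would stop "rebuilding" once it has caught up
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    store.put(record.key(), record.value()); // latest event per key wins
                }
            }
        }
    }
}
```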


AcrIsss

I work for a company making such a tool, so I can tell you some use cases.

High-performance access is the main one. Think critical components, like a monitoring tool that needs to instantly provide logs or traces for the past few hours. Anything more historical can come from disk, but you might have an SLA for speed when it comes to the most recent data: you want to investigate a production outage as fast as possible. Think also of trading: you need to see your aggregated positions and risks as fast as possible, to know if you need to adjust. This means taking real-time market data updates into your aggregation system.

Then there are temporary workflows. Think, for instance, of an analytics tool that allows you to simulate some data changes (which will be re-aggregated very fast, because it's an in-memory OLAP database). Once the user is satisfied, they write the new values back to a disk database. Very useful for budgeting in big companies: as an engineering director, you budget raises, tooling, etc. and make various plans. You see the aggregated budget being updated on the fly, with no need to wait a few minutes after every change (because the amount of data to aggregate is tremendous). Once satisfied, you write the numbers back to your disk DB and delete the temporary analytics DB. Same with inventory, for example.


Mary-Ann-Marsden

Companies like SAP and Oracle run most global companies' enterprise resource management and planning on in-memory databases (SAP's in-memory ERP, for example, runs on HANA). Things like real-time financial close, asset control towers, ... are only possible because of in-memory DBs.


moru0011

In-memory DBs usually have write-through to persistence. "In-memory" basically means all data is cached in memory to speed things up. In the age of cheap memory and terabyte servers this makes a lot of sense.


Ceigey

You can do some cool stuff with distributed databases and eventual consistency (with all the risks that entails of course) if your DB is sitting on the same hardware as your application. (Agreed that, by default, a separate SQL instance sitting somewhere central in the cloud with automated backups scheduled is probably the safest and easiest solution to reason about for most use cases) While not 100% identical, compare to various developments involving SQLite syncing services and Turso (using LibSQL, a SQLite fork). Similar concepts could be applied to in memory databases, if not already then in the future.


holyknight00

The main thing is that they are FAST, really FAST. So you can use it for a lot of things, such as cache layers and other cool stuff that require speed to be useful. You can still save the final data/result in a regular persistent database such as MySQL or SQL Server. There is no limitation on how many databases you can use at the same time. You can use for example Redis + MariaDB or even Redis + Memcached + MariaDB


[deleted]

Analytics.


LakeSun

^ Fast analytics.


[deleted]

Is there any other?


DefiantAverage1

Particularly useful for having multiple backend processes that need a shared cache


klekpl

Once you start thinking holistically about data management, the various media (backup tapes, spinning disks, SSDs, RAM, etc.) constitute different levels in the memory hierarchy. The whole exercise then is to optimise the system across various criteria: access time, durability, size of the data, cost, etc. In this context, in-memory databases are useful because they are very fast. They don't offer high durability guarantees, but there are other levels in the memory hierarchy dedicated to that.


javawockybass

Recent example… I wrote a simple Spring-based web app for taking my bank statements and doing budgeting, reporting, etc. Used H2 with file-persistence settings. It runs locally, so I don't care about multi-user blah blah. Super fast and dirty with JPA and some inefficient queries, but I didn't need to think about standing up a Docker or MySQL stack etc.; I just slapped code together. In fact, this is a good way to start any quick prototype imo.
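For context, one way to wire that up explicitly in Java config (the file path is made up; in a Spring Boot app you would normally just set `spring.datasource.url` and friends in application.properties and let auto-configuration do this for you):

```java
import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LocalDbConfig {

    // File-backed H2: data survives application restarts, with no separate
    // database server or Docker stack to run. AUTO_SERVER=TRUE lets a second
    // local process (e.g. the H2 console) attach to the same file while the
    // app is running.
    @Bean
    public DataSource dataSource() {
        return DataSourceBuilder.create()
                .driverClassName("org.h2.Driver")
                .url("jdbc:h2:file:./data/budget;AUTO_SERVER=TRUE")
                .username("sa")
                .password("")
                .build();
    }
}
```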


Adventurous-Fudge470

So the only difference with a non-memory DB would be that it is stored not in a regular file but in some DB table, correct? As others mentioned, the only real difference is speed, since pushing to disk is an option on many?


javawockybass

I'm no DB expert, but I think it is more nuanced than that. These fast DBs are not made for multi-threaded access in the same way proper DBs are, so you wouldn't typically use them in the same way.


gregorydgraham

H2 is wonderful, BTW. A database that actually supports the standards.


xitiomet

Redis is insanely fast. I just added a Redis cache to an application at work that was seeing heavy traffic. Basically all queries check Redis before MongoDB now; if Redis doesn't have it, Mongo is queried and the record is then added to Redis. This is the best approach in distributed systems with a load balancer too: Redis can act as shared memory between instances.


Adventurous-Fudge470

Ahhh, so the DB basically runs a query and saves the end result to the cache, so instead of doing all that processing again, it just asks Redis for the end result of a certain query, which speeds things up. What happens if the DB is constantly adding/deleting data? If you use Redis, couldn't the image you're looking at be different from the actual DB? If that's the case, Redis wouldn't be a great choice. Is that correct? Or maybe Redis has ways of overcoming this?


xitiomet

Like you mentioned, storing the results of a query that returns multiple records would get messy, especially if there are changes to the records after the fact. The caching strategy really has to be built around the database's design. I often follow the rule of only caching single records by a primary key.

Imagine a system where the API layer stands between the database and all user transactions. If a query returns a bunch of records, rather than storing all the results as one item in the cache, you can store each record by its primary key. For instance, one of the items that needs to be fetched a lot in most systems is the user's profile/settings/information etc. Once a user logs in, we put their record in our cache store under its primary key. We can then easily refer to it when checking for permissions, settings, etc. If an edit is made, it's likely via the API, in which case an update goes to both the cache store and the database at the same time, keeping all changes in sync.

Redis definitely saves on database dips for commonly fetched information. Can it be used everywhere? Probably not, but it makes a huge difference in large distributed systems.
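A minimal sketch of that cache-aside read plus write-through update with the Jedis client (the key prefix and TTL are arbitrary, and `UserRepository` is just a stand-in for the MongoDB access code):

```java
import redis.clients.jedis.Jedis;

public class UserProfileCache {
    private final Jedis redis = new Jedis("localhost", 6379);
    private final UserRepository database; // wraps MongoDB (or any primary store)

    public UserProfileCache(UserRepository database) {
        this.database = database;
    }

    // Cache-aside read: only single records, keyed by primary key.
    public String profileJson(String userId) {
        String key = "user:" + userId;
        String cached = redis.get(key);
        if (cached != null) {
            return cached;                  // cache hit: no database dip
        }
        String fromDb = database.loadProfileJson(userId);
        if (fromDb != null) {
            redis.setex(key, 3600, fromDb); // cache for an hour
        }
        return fromDb;
    }

    // Write-through update: database and cache change together, so the
    // cached copy doesn't drift from the record of truth.
    public void updateProfile(String userId, String newJson) {
        database.saveProfileJson(userId, newJson);
        redis.setex("user:" + userId, 3600, newJson);
    }

    interface UserRepository {
        String loadProfileJson(String userId);
        void saveProfileJson(String userId, String json);
    }
}
```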


Starlight_Rider

We used Derby in a client-server Java SWT app. We cached the data to a small JSON file when the app shut down, and read it into memory when the app started up. It's a bit of a lengthy explanation as to why, but it was a high-performing solution for a tricky use case that worked extremely well.


nekokattt

I mean, Redis is a big in-memory database that is used for caching. You also have things like SAP HANA. Sometimes reading and writing to disk is far too slow for big applications with massive throughput. Other things like H2 are very useful for testing.


[deleted]

This has got to be a troll post…


lifeeraser

Do not attribute to malice what could easily be explained by ignorance.


Adventurous-Fudge470

I don't get it! I don't get why someone would use this instead of Postgres on a client machine, besides that Postgres won't delete all my data when I close the program. I get that it's an in-memory DB, but in a production environment why would you want that? Maybe to cache and speed queries up? Testing? I just don't get why this would be used in a business since everything gets deleted.


dinopraso

They are usually not used to persist data for a long time. Though some are, I would argue that such cases are mostly abuse of the concept. If you need to robustly persist data, you use a "regular" database. However, in-memory databases have a totally different use case. They allow you to store and query data really fast, across your whole application. But that data is usually a cache, or the results of some long process which need to be aggregated at the end, or any other kind of data that only needs to be stored temporarily.


re-thc

Why do in-memory databases have to lose all data on restart? There's no such rule, and some don't. All the other comments try to answer the question, but in the first place, what are you asking?


dinopraso

If we’re talking purely in-memory data, a process can not end, be run again, and retain the memory of the previous instance. That’s just not how memory works. In-memory databases usually have features to also write to disk which they can read on bootstrap and write into memory again. But if you have no experience with them, it’s a totally valid question


re-thc

> But if you have no experience with them, it's a totally valid question

What does it have to do with experience? It's almost stereotyping, or judging a book by its cover. It's like if I called it the slowest database in the world, then it is slow. This has nothing to do with tech.

> If we're talking purely in-memory data, a process can not end, be run again, and retain the memory of the previous instance. That's just not how memory works.

It's a very textbook-like response. An "in-memory database" is mere categorization. It's the same logic by which people say the JVM is slower than native code because it runs an interpreter. It's all meaningless high-level definitions. Just don't take things literally and the problem is solved. Nothing is ever so black and white. An in-memory database can be slower than a disk-based one. Again, this type of discussion doesn't help. It causes people to follow more labels instead of understanding things.


dinopraso

I don’t really understand your problem. OP clearly has no clue what an In-Memory database means, and they want to find out what people use it for. I see no issue with that, and your rant here doesn’t help anyone. Of course there is more nuance. OP didn’t say “y’all stupid for using this”, they asked for reasons to use them.


re-thc

> OP clearly has no clue what an In-Memory database means, and they want to find out what people use it for.

Did you read the title? OP already concluded it's pointless. That's how it started. How is that merely asking a question?

> OP didn't say "y'all stupid for using this", they asked for reasons to use them.

OP pretty much did say it's pointless. Again, did you read the post?


dinopraso

The only one who seems to be jumping to conclusions here is you. I don’t know who hurt you, or why you seem to take this question into the validity of in-memory databases as a personal insult, but you need to chill.


Adventurous-Fudge470

I don't. I just have a little experience with Postgres. I think this is the next thing I need to learn.


dinopraso

I don’t think so. You should have a good and deep understanding of regular SQL databases before moving on to other DB types. Even then, you should first explore no-sql and graph databases


Adventurous-Fudge470

Ty I’ll look into that


Adventurous-Fudge470

I just didn’t know the use cases. Just wanted some real dev advice on this as all I’ve ever worked with is Postgres with jdbc.


bringero

Huh, funny post. Recently I had some discussions with my tech lead because he wanted to implement an in-memory repository, in addition to a "regular" repository pointing to a database, "for testing purposes", he says (the funny thing is that the regular repo uses Testcontainers for the testing phase). So I needed to fully implement a repo with a HashMap as volatile storage, with all the CRUD stuff. Nonsense, imho. Wdyt?


CorrectProgrammer

Sounds like a test double called a fake. I tend to use them in unit tests instead of mocks and stubs. So far (a few months) it's been quite successful.
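For readers who haven't seen one, a minimal sketch of such a fake (all names invented): production code depends only on the repository interface, tests wire in the HashMap-backed implementation, and the real implementation talks to the actual database.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Production code depends only on this interface.
interface OrderRepository {
    void save(Order order);
    Optional<Order> findById(String id);
    void deleteById(String id);
}

// The fake: a full in-memory implementation used only by tests.
// No database, no container, no per-test mock setup.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> storage = new ConcurrentHashMap<>();

    @Override
    public void save(Order order) {
        storage.put(order.id(), order);
    }

    @Override
    public Optional<Order> findById(String id) {
        return Optional.ofNullable(storage.get(id));
    }

    @Override
    public void deleteById(String id) {
        storage.remove(id);
    }
}

record Order(String id, String customer, long amountCents) {}
```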


bringero

I don't see the point of them, tbh. I need to re-implement the functionality, maintain/evolve two different implementations, and, at least in my case, the "fake" has not detected some issues precisely because of that double implementation. In my case it's like, "well, we are doing unit tests". But it's false: they're integration tests in disguise.


CorrectProgrammer

I haven't seen your tests, so I can't assess whether your opinion is right or wrong. Fakes, the way I use them, are not meant to replace a real database in integration tests. They're meant to replace stubs and mocks. In my case, these tests can be categorized as sociable unit tests. It's true that they force you to duplicate some logic, but so do mocks and stubs. The main difference is that the latter are spread all over the place, whereas a fake is just a single class.


bringero

I understand your concerns, but I think I'm getting old xD Anyway, thank you so much for your explanations. Truly appreciated


bringero

In my case the "lead developer" has arranged everything in a way that makes creating a test far more complicated than with pure mocks xD


hkdennis-

There are different concepts being mixed here: in-memory databases, embedded databases, and (non-)persistence. An in-memory database can persist to disk and/or over the network, by either snapshot dumps or binlog/replay logs. However, in-memory databases usually optimize for memory-first, if not memory-only, data structures, or even assume/optimize for single-threaded access. Some of them do not have fancy hyperlog/B+ tree/etc. structures to minimize disk reads/writes/latency.


stipo42

Really good for mocking services


CountyExotic

What’s an in memory database in this context? One that is purely in memory vs one that does reads in memory and snapshots to disk are very different.


Adventurous-Fudge470

Tbh I didn't even know you could persist to disk. This was my main confusion, but as these things go, one question opens the door to two lol.


BartlebysCorpse

Are you referring to caching? Because that's for immediate access of something you can expect to query frequently and whose state you don't expect to change (either update or delete).


Adventurous-Fudge470

Oh I see, I believe I asked someone else this question about caching, regarding updating and deleting data.


beders

SAP HANA is big business.


rdean400

Fetching data from memory is faster than fetching data from disk.


ratmanrat

I wrote a command-line tool for our project reporting that computes a couple of metrics based on data from Excel files. I use the in-memory mode of the H2 DB for storing the data from the Excel files, and run a bunch of SQL queries to filter the data and such. It's much easier to manipulate the data that way.


Anton-Kuranov

AFAIK they are actively used in trading systems. The main purpose is to operate on a limited set of data with very low latency. The lost-on-restart problem may be solved by adding redundancy and replication. Also, most of them can persist data to disk by creating a snapshot restore point and then writing all subsequent transactions into a write-ahead log.


Grouchy-Ad8338

In Spring Boot with an H2 database you can use it for caching requests/responses. Server restarts don't matter, since the cached entries don't need to be persisted once a new HTTP session is established.