I'm starting from scratch

LucaMs

Expert
Licensed User
Longtime User
"I'm starting from scratch" is a non-literal translation of the Italian phrase: "Ricomincio da zero" ("I restart from zero", literally).

A famous Italian film was called "Ricomincio da tre" ("I'm starting from three"); when someone asked the protagonist why "da tre" (from three) and not "da zero", he said that three things in his life had worked out well for him.

After this long introduction: I have decided to get serious about developing a complex project that I conceived 8-10 years ago but have pursued only in a very indecisive and inconsistent way (a few days every year!), hoping that this time I won't immediately feel like giving up.

I begin by submitting a question to you (next post).
 

LucaMs

Expert
Licensed User
Longtime User
The project is client-server; I have to develop the server.

First question/doubt:
The current state of each connected user and other objects related to him.

I had thought (and implemented, in many versions of the server) NOT to save them to mass storage, mainly as a matter of overall speed. Access to hard disks is slow. I am an old programmer, so I don't always have SSDs in mind, and in any case even SSDs are certainly much slower than access to central memory, commonly called RAM 😁.

It is obvious that in the event of a server crash, perhaps due to a power failure, any data not saved to mass storage would be lost. Is it worth taking that risk and avoiding weighing down the server by making it continuously access hard disks/SSDs?
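For what it's worth, the usual middle ground here is a write-behind snapshot: serve everything from RAM and persist in the background at a fixed interval, so a crash costs at most one interval of changes. A minimal sketch in Python; the `GAME_STATE` dict, the file name, and the 30-second interval are illustrative assumptions, not anything from the actual project:

```python
# Minimal write-behind sketch: all reads/writes hit RAM; a background
# thread snapshots the whole state to disk every SNAPSHOT_SECONDS.
# After a crash you lose at most one interval of changes.
import json
import os
import threading
import time

SNAPSHOT_SECONDS = 30           # assumed interval; tune to taste
SNAPSHOT_FILE = "state.json"    # hypothetical path

state_lock = threading.Lock()
GAME_STATE = {}                 # user_id -> current state, lives in RAM

def update_user(user_id, data):
    with state_lock:
        GAME_STATE[user_id] = data   # pure RAM, no disk I/O here

def snapshot_loop():
    while True:
        time.sleep(SNAPSHOT_SECONDS)
        with state_lock:
            payload = json.dumps(GAME_STATE)
        # write to a temp file, then atomically replace the old snapshot,
        # so a crash mid-write never leaves a corrupt file
        tmp = SNAPSHOT_FILE + ".tmp"
        with open(tmp, "w") as f:
            f.write(payload)
        os.replace(tmp, SNAPSHOT_FILE)

threading.Thread(target=snapshot_loop, daemon=True).start()
update_user("luca", {"coins": 40, "score": 120})   # hypothetical user
```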
 

epiCode

Active Member
Licensed User
These three things will decide it:
1. "In the event of a server crash": how often does that event happen?
2. How critical is the data loss?
3. How critical is the response time/speed, i.e. what is the actual slowdown if regular read/write operations are done?

Also, you can use cloud storage, which is scalable and has fast response times even in heavy-load scenarios.
 

LucaMs

Expert
Licensed User
Longtime User
epiCode said:
These three things will decide it:
1. "In the event of a server crash": how often does that event happen?
2. How critical is the data loss?
3. How critical is the response time/speed, i.e. what is the actual slowdown if regular read/write operations are done?

Also, you can use cloud storage, which is scalable and has fast response times even in heavy-load scenarios.
1. Who knows?
2. Who knows?
3. Who knows?
😅 :confused:
(4) Cloud = other servers that will also have to write to mass storage devices.


Point 2: I am actually the one who should evaluate how important that data is.
These are games, so users could lose virtual money or scores in ongoing games.
 

Jeffrey Cameron

Well-Known Member
Licensed User
Longtime User
We wrote a cloud-based point-of-sale application utilizing Android tablets for the "register" portion. The application makes requests to our ASP.NET back-end via a custom API we implemented. The tablet holds non-volatile semi-static information (e.g. employee list, time-clock list, etc.) but everything else is "real-time" requested from the server which queries MSSQL server databases.

Given that context, our metrics indicate each request averages less than 200 ms response time, with tens of thousands of requests from all users in any given month.

My advice? Don't worry about storage access times or storage space. Those can be easily scaled if necessary. Focus more on economy of data transmission between client and server.
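To illustrate that last point, one common way to economize on transmission is to send only the fields that changed since the last sync instead of the whole record. A tiny hypothetical sketch (the field names are invented):

```python
# Hypothetical illustration of "economy of transmission": send only the
# fields that changed since the last sync instead of the whole record.
def make_delta(previous: dict, current: dict) -> dict:
    """Return only the keys whose values changed (or were added)."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

old = {"score": 120, "coins": 40, "room": "lobby"}
new = {"score": 125, "coins": 40, "room": "table3"}

print(make_delta(old, new))   # {'score': 125, 'room': 'table3'}
```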
 

LucaMs

Expert
Licensed User
Longtime User
Jeffrey Cameron said:
...queries MSSQL server databases.
I realize that I asked the question badly; it could be interpreted as if I wanted to decide whether to save objects to mass storage using serialization.

Obviously I am referring to any saving of "objects" to a DBMS, and consequently to mass storage.

"To save or not to save" 💀 everything to mass storage (DBMS).
 

hatzisn

Expert
Licensed User
Longtime User
LucaMs said:
1. Who knows?
2. Who knows?
3. Who knows?
😅 :confused:
(4) Cloud = other servers that will also have to write to mass storage devices.

Point 2: I am actually the one who should evaluate how important that data is.
These are games, so users could lose virtual money or scores in ongoing games.

Follow Elon Musk's example with Starship: crash a lot, fly once. 🤣 Just kidding. Redundancy is the answer. I read somewhere that Starship flies using three x86 dual-core (?) processors that run redundantly in a special computer running a Linux distro created by SpaceX. All three processors make the same calculations and continuously compare their results; if one fails, the next takes over. I thought it was a joke, and maybe it is just gossip, but after all he is Elon and he does not think the same way as most people. So this is the answer: have at least 2 VPSes constantly exchanging MQTT messages with JSON payloads and you may make your game fly. If one is down, the other will be up. Save every X minutes.
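A minimal sketch of that two-VPS heartbeat: each node publishes a JSON heartbeat over MQTT, and the standby promotes itself when the peer goes silent. This assumes the paho-mqtt 1.x client API; the broker address, topic, node IDs, and timing constants are all invented for illustration:

```python
# Two VPS instances exchange MQTT heartbeats (JSON payloads);
# if the peer goes silent, the standby promotes itself to primary.
import json
import time
import paho.mqtt.client as mqtt

NODE_ID = "vps-1"                 # "vps-2" on the other machine
PEER_TIMEOUT = 10                 # seconds of silence before failover
last_peer_beat = time.monotonic()
primary = NODE_ID == "vps-1"      # vps-1 starts as the active node

def on_message(client, userdata, msg):
    global last_peer_beat
    beat = json.loads(msg.payload)
    if beat["node"] != NODE_ID:   # ignore our own heartbeats
        last_peer_beat = time.monotonic()

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)   # hypothetical broker
client.subscribe("game/heartbeat")
client.loop_start()

while True:
    client.publish("game/heartbeat",
                   json.dumps({"node": NODE_ID, "primary": primary}))
    if not primary and time.monotonic() - last_peer_beat > PEER_TIMEOUT:
        primary = True            # peer is down: take over
        print("Peer silent, promoting", NODE_ID, "to primary")
    time.sleep(2)
```

Combined with the "save every X minutes" snapshot discussed earlier, the promoted node could reload the last saved state and carry on.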
 

hatzisn

Expert
Licensed User
Longtime User
Sandman said:
Premature optimization is the root of all evil (or at least most of it) in programming.

- Donald Knuth (info)

Was he drunk? Plan, create, check, correct, and around we go. If "correct" becomes a bottleneck, the cycle becomes an arc.
 

Sagenut

Expert
Licensed User
Longtime User
hatzisn said:
Was he drunk? Plan, create, check, correct, and around we go. If "correct" becomes a bottleneck, the cycle becomes an arc.
I think that what @Sandman is trying to say is:
If you try to prevent all the possible problems and scenarios that some code can create, you will end up like a dog spinning around trying to bite its own tail.
Of course every good developer will try to think of the majority of problems and prevent them, but nailing perfection at the first shot is nearly impossible.
At a certain point you should publish your creation and then wait for user feedback.
Based on that you will improve, add, correct.
Seeking perfection (perfection only in your own eyes at that moment) could lead you to an eternal do/undo/redo cycle.
 

hatzisn

Expert
Licensed User
Longtime User
Sagenut said:
I think that what @Sandman is trying to say is:
If you try to prevent all the possible problems and scenarios that some code can create, you will end up like a dog spinning around trying to bite its own tail.
Of course every good developer will try to think of the majority of problems and prevent them, but nailing perfection at the first shot is nearly impossible.
At a certain point you should publish your creation and then wait for user feedback.
Based on that you will improve, add, correct.
Seeking perfection (perfection only in your own eyes at that moment) could lead you to an eternal do/undo/redo cycle.

Maybe the language I used was kind of offensive, now that I read it again, so @Sandman I apologize. Experience taught me that correcting mistakes afterwards takes much more time, and thus money, than planning ahead. But I believe the final solution should be a hybrid between what @Sandman suggests and my approach. And that is agile development.
 