Sorry, the short takeaway from that spiel is: saving them as condensed points (where each point/bucket represents multiple readings) works great.
Consider saving multiple zoom levels.
Like, if you want to be able to show from 0.25 second resolution to a full week's worth (because you're looking at the light cycle timing or something) and say your graph is 800 pixels/points wide, then:
one week scale = 7 x 86400 x 4 = 2419200 readings
0.25 second resolution scale = 800 readings (one reading per point)
2419200 / 800 = 3024 factor between the two (smallest-to-largest) scales
3024 ^ (1/(numzoomlevels-1)) = 3024 ^ (1/5) = 4.97 = round up to a 5x multiplier step between scales (with 6 zoom levels)
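If you want to sanity-check that arithmetic, here's the same calculation as a tiny standalone C++ snippet (the names are mine, the constants are just the figures above):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double readings_per_second = 4.0;                        // 0.25 s resolution
    const double week_readings = 7 * 86400 * readings_per_second;  // 2,419,200
    const int graph_width = 800;                                   // points across the graph
    const int num_zoom_levels = 6;

    // Smallest-to-largest scale factor, then the per-level step, rounded up.
    const double factor = week_readings / graph_width;                      // 3024
    const double raw_step = std::pow(factor, 1.0 / (num_zoom_levels - 1));  // ~4.97
    const int step = (int)std::ceil(raw_step);                              // 5

    printf("factor = %.0f, step = %.2f -> %dx\n", factor, raw_step, step);
    return 0;
}
```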
So you'd have:
Zoom level 1 = 1 reading per point = 0.25 seconds = 200 second-wide graph
Zoom level 2 = 5 readings per point = 1.25 seconds = 16.7 minute-wide graph
Zoom level 3 = 25 readings per point = 6.25 seconds = 83 minute-wide graph
Zoom level 4 = 125 readings per point = 31.25 seconds = 6.9 hour-wide graph
Zoom level 5 = 625 readings per point = 156.25 seconds = 34.7 hour-wide graph
Zoom level 6 = 3125 readings per point = 781.25 seconds = 7.23 day-wide graph
and instead of keeping 2419200 readings at 0.25 second resolution x say 4 bytes/float = ~9.7 MB (per channel),
you'd have 6 zoom levels x 800 samples x 3 summaries (min, max & average) x 4 bytes/float = 0.058 MB (per channel) = 99% memory saving (and instantaneous graph display, since those readings are already condensed = no need to go through those 2.4M readings)
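As a rough sketch of what that storage could look like (plain C++, names are mine and nothing here is from any particular graphing library): one min/max/average summary per plotted point, 800 points per zoom level, per channel; the sizeof comes out at the 57,600 bytes (~0.058 MB) figure above.

```cpp
#include <cfloat>
#include <cstdio>

// One plotted point, condensed from many raw readings.
struct Summary {
    float min = FLT_MAX;
    float max = -FLT_MAX;
    float avg = 0.0f;
};

constexpr int kZoomLevels = 6;
constexpr int kGraphWidth = 800;

// Per-channel storage: 6 zoom levels x 800 points x 3 floats.
struct Channel {
    Summary points[kZoomLevels][kGraphWidth];
};

int main() {
    // 6 * 800 * 3 * 4 bytes = 57,600 bytes (~0.058 MB) per channel,
    // versus ~9.7 MB for a full week of raw 0.25 s readings.
    printf("bytes per channel: %zu\n", sizeof(Channel));
    return 0;
}
```

One way to keep the updates cheap (my assumption, not something stated above): treat each row as a ring buffer, feed raw readings into zoom level 1, and push one condensed point up to the next level for every 5 points completed below it.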
On the other hand, if your software specification/customer says one day will do and is OK with quarter-hour resolution, then, unless there's some benefit to you in exceeding requirements, or you're a bit of a perfectionist, or you just like doing a super-good job, or you enjoy this stuff: 96 columns is the go (but I'd track the minimum and maximum too, so that short events don't get lost in the consolidation).
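And for the simpler one-day / quarter-hour version, the consolidation could look something like this (again just a sketch with made-up names): each 15-minute column keeps min, max and a running average, so a short spike still shows up in the max trace even though the average smooths it out.

```cpp
#include <cfloat>
#include <cstdio>

// One quarter-hour column: min/max catch short events, avg gives the trend.
// At 0.25 s resolution that's 3600 readings consolidated per column.
struct Column {
    float min = FLT_MAX;
    float max = -FLT_MAX;
    double sum = 0.0;
    unsigned long count = 0;

    void add(float reading) {
        if (reading < min) min = reading;
        if (reading > max) max = reading;
        sum += reading;
        ++count;
    }
    float avg() const { return count ? (float)(sum / count) : 0.0f; }
};

int main() {
    Column day[96];             // 24 hours x 4 columns per hour
    day[0].add(21.4f);
    day[0].add(27.9f);          // short spike: invisible in the average, kept by max
    day[0].add(21.6f);
    printf("column 0: min=%.1f max=%.1f avg=%.1f\n",
           day[0].min, day[0].max, day[0].avg());
    return 0;
}
```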
What could go wrong?!?!
(hmm - so much for the "short takeaway"...)