B4R Question: Transferring large files from B4A to B4R

RJB

After updating an app to work with the latest ESP32 boards (v3.x.x), transfers of large files (images up to 100 KB) started to fail. This seems to be due to the block size causing problems in the _NewData sub, specifically on return from the sub (stack problems?). I'm using B4RSerializator to transfer the blocks and related information (file size, name, etc.).
Reducing the block size to 1250 bytes avoids the crash but is very slow.
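The sending side currently looks roughly like this (a minimal sketch with illustrative names, not the exact project code; astream is assumed to be an AsyncStreams initialized in prefix mode):

B4X:
' B4A side: send a file in blocks via B4RSerializator (sketch)
' Assumes astream is a prefix-mode AsyncStreams, initialized elsewhere.
Sub SendFile (Dir As String, FileName As String)
    Dim ser As B4RSerializator
    Dim FileSize As Long = File.Size(Dir, FileName)
    Dim inp As InputStream = File.OpenInput(Dir, FileName)
    Dim offset As Long = 0
    Do While offset < FileSize
        Dim blockSize As Int = Min(4000, FileSize - offset) 'the block size under test
        Dim block(blockSize) As Byte
        inp.ReadBytes(block, 0, blockSize)
        'fields: Long, Int, Long, String, Byte()
        astream.Write(ser.ConvertArrayToBytes(Array(FileSize, blockSize, offset, FileName, block)))
        offset = offset + blockSize
    Loop
    inp.Close
End Sub

(In a real transfer the sender would probably wait for a short acknowledgement from the B4R side before sending the next block, so the receiver's buffer cannot overflow.)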
Are there any examples of fast transfer of large files? I haven't been able to find any.
Thanks
 

RJB

Thanks for the suggestion. I've taken a quick look, which prompted some questions about the whole way I've implemented the transfer. Can you help with the following queries, please?

I'm using prefix mode and B4RSerializator, sending from B4A to B4R.
The fields sent via the serializator are: Long, Int, Long, String and Byte().

- How are B4A Longs interpreted by B4R, i.e. 8 bytes in B4A vs. 4 in B4R?
- How are Ints interpreted, 4 bytes in B4A vs. 2 in B4R?
- How do I calculate MaxBufferSize at the B4R end? (Max size of Byte() + (max string length + 1) + 4 + 8 + 8) / 2, as the buffer size is specified in UInts? (A worked example follows this list.)
- If the Byte() and String are shorter than the maximum, does prefix mode always wait for the MoreData delay (possibly multiple times)?
- How do I read the serializator objects into B4R variables? E.g. a Byte() object into a Byte() array, a String object into a String without a 'programming mistake' warning, an Int into an Int/Long, a Long into a Long/??
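To make the third point concrete, a worked instance of that formula (assuming it is even the right formula): with a 4000-byte block and a 32-character file name, (4000 + (32 + 1) + 4 + 8 + 8) / 2 = 2026.5, i.e. roughly 2027 UInts, before any serializator overhead.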

Thanks
 

Erel

Best to check the source code: https://www.b4x.com/android/forum/t...ceive-objects-instead-of-bytes.72404/#content

Longs are converted to unsigned ints.

If #CheckArrayBounds is enabled then you will get an error if there is an overflow. The max size depends on the content, so it is not simple to estimate exactly; run some tests. Check the length of the returned array and make sure that the input buffer is large enough.
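The receiving pattern from that tutorial looks roughly like this (a sketch; the typed assignments are an assumption to show the field order and may need explicit conversion, which is where the 'programming mistake' warning comes from):

B4X:
' B4R side: deserialize in the NewData event (sketch based on the linked tutorial)
Private ser As B4RSerializator

Sub AStream_NewData (Buffer() As Byte)
    Dim be(10) As Object 'storage buffer used during deserialization
    Dim objects() As Object = ser.ConvertBytesToArray(Buffer, be)
    If objects.Length = 0 Then
        Log("Deserialization failed - input buffer too small?")
        Return
    End If
    'field order must match the B4A sender: Long, Int, Long, String, Byte()
    Dim FileSize As ULong = objects(0) 'B4A Long arrives as an unsigned int
    Dim BlockSize As UInt = objects(1)
    Dim Offset As ULong = objects(2)
    Dim FName As String = objects(3)
    Dim FileData() As Byte = objects(4)
    Log("Block for ", FName, " at ", Offset, ", length: ", FileData.Length)
End Sub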
 

RJB

Thanks, I'll check that.
I had the size set very large and got no error message.

Further testing points to the file write stream getting too big. Is there a limit?
I'm using LittleFS, and the free space used doesn't change until the file is closed; Flush doesn't seem to do anything.
 

RJB

It looks like the note above about the WriteStream/free space is wrong: free space changes when the written file reaches 4 KB, so presumably the LittleFS block size is 4 KB.
However, the crash seems to happen at the point where the file goes above 4 KB, so I'm still investigating.
 