I am primarily a C developer, so here is what I would expect: perform the division as a long, then cast the result to the appropriate size. Following C rules, it should become 3661192999 -> 0xDA395F27 -> -633774297; I don't care about the sign because I'm checking individual bits.
However, the actual result is 2147483647 -> 0x7FFFFFFF, i.e. the maximum value for an Int.
Is this expected? As I said, I am primarily a C developer, and maybe the C mindset is fooling me here. If it is expected, where can I read up on this?
B4X:
Dim a As Long = 239939944437253
Dim b As Int = a / 65536
Log(b)
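Since B4X compiles to Java, here is a minimal Java sketch that reproduces both numbers. The first cast follows the C-style path (64-bit division, keep the low 32 bits); the second assumes the division result passes through a double first, because Java's narrowing conversion from double to int saturates out-of-range values instead of wrapping. That B4X actually routes `/` through a double is my assumption, not something I have confirmed.

```java
public class IntCastDemo {
    public static void main(String[] args) {
        long a = 239939944437253L;

        // C-style expectation: divide as long, then truncate to 32 bits.
        // Java's long-to-int cast keeps the low 32 bits, so this wraps.
        int truncated = (int) (a / 65536);            // -633774297 (0xDA395F27)

        // Assumed B4X path: if the division goes through a double,
        // Java's double-to-int narrowing saturates at Integer.MAX_VALUE.
        int saturated = (int) ((double) a / 65536);   // 2147483647 (0x7FFFFFFF)

        System.out.println(truncated);
        System.out.println(saturated);
    }
}
```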