Memory efficiency problems.

edited May 2019 in How To...

I am doing some research into this, particularly in Processing. Are there any special tricks? My biggest problem is when trying to load ints from a text file. Example: tof = int(loadStrings("tof.txt")); If tof is an array of 10 million ints, this takes up 200 MB of memory, but if I just create the array from thin air, only 40 MB. So I think I need to use native Java libraries, but I don't know if there is another way.
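One way around the 200 MB spike (a sketch with my own class and method names, assuming one int per line in the file) is to parse each line straight into a preallocated int[] with a BufferedReader, so the 10-million-entry String[] that loadStrings builds never exists:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class IntFileReader {
  // Parse one int per line directly into a preallocated array,
  // avoiding an intermediate String[] of millions of entries.
  static int[] readInts(String path, int count) throws IOException {
    int[] out = new int[count];
    try (BufferedReader br = new BufferedReader(new FileReader(path))) {
      for (int i = 0; i < count; i++) {
        out[i] = Integer.parseInt(br.readLine().trim());
      }
    }
    return out;
  }

  public static void main(String[] args) throws IOException {
    // Write a tiny sample file, then read it back.
    java.io.File f = java.io.File.createTempFile("tof", ".txt");
    try (java.io.PrintWriter pw = new java.io.PrintWriter(f)) {
      pw.println(7);
      pw.println(42);
      pw.println(-3);
    }
    int[] tof = readInts(f.getPath(), 3);
    System.out.println(tof[0] + " " + tof[1] + " " + tof[2]);
    f.delete();
  }
}
```

The peak memory is then just the int[] itself plus one String per line, which the garbage collector reclaims as you go.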

I also wondered if using the BitSet library would be a good strategy to cut memory down. I am working on a machine which won't let me expand the memory preferences beyond 256 MB. I understand that BitSet is slow, but is it slower than creating my own mapping of all values between 0-256, which would be a boolean array of 2048 (8*256) or something similar? Actually I will have ternary values (unexplored, false, true) in a huge tree data structure.
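For the ternary case specifically, one option (a sketch with my own names, not a standard library class) is a java.util.BitSet using 2 bits per entry, which is roughly 4x smaller than a byte[] and 16x smaller than an int[]:

```java
import java.util.BitSet;

// Sketch of a ternary array backed by BitSet: 2 bits per entry.
public class TernaryArray {
  public static final int UNEXPLORED = 0, FALSE = 1, TRUE = 2;
  private final BitSet bits;

  public TernaryArray(int size) {
    bits = new BitSet(size * 2);
  }

  public void set(int i, int value) {
    bits.set(2 * i, (value & 1) != 0);      // low bit
    bits.set(2 * i + 1, (value & 2) != 0);  // high bit
  }

  public int get(int i) {
    return (bits.get(2 * i) ? 1 : 0) | (bits.get(2 * i + 1) ? 2 : 0);
  }

  public static void main(String[] args) {
    TernaryArray t = new TernaryArray(1000);
    t.set(5, TRUE);
    t.set(6, FALSE);
    // Entry 7 was never set, so it reads back as UNEXPLORED (0).
    System.out.println(t.get(5) + " " + t.get(6) + " " + t.get(7));
  }
}
```

At 13 million entries this is about 3.3 MB of bits instead of roughly 52 MB for an int[].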


Answers

  • What could you possibly be using 10 million values for?

  • Answer ✓

    There are smaller things than ints. Bytes, for instance.

    The extra memory usage in your first example is probably the intermediate string array.
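A minimal sketch of the byte idea: a byte[] holds values 0-255 in a quarter of the space of an int[], but Java bytes are signed, so you mask with 0xFF when reading them back (the class name here is my own):

```java
// Storing values 0..255 in a byte[] instead of an int[] cuts
// memory by 4x (e.g. ~10 MB instead of ~40 MB for 10 million entries).
public class ByteStorage {
  public static void main(String[] args) {
    byte[] vals = new byte[10];
    vals[0] = (byte) 200;       // stored internally as -56
    int back = vals[0] & 0xFF;  // mask recovers the unsigned value 200
    System.out.println(back);
  }
}
```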

  • I don't have experience here, but if you want a memory-efficient way of parsing ints out of a text file, you could try Java's Scanner:

    https://docs.oracle.com/javase/tutorial/essential/io/scanning.html
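A minimal sketch of that approach (the temp-file setup here is just for illustration; real code would point at tof.txt): Scanner streams tokens one at a time, so no intermediate String[] is built.

```java
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Scanner;

public class ScanInts {
  public static void main(String[] args) throws IOException {
    // Create a small sample file for the demonstration.
    File f = File.createTempFile("tof", ".txt");
    try (PrintWriter pw = new PrintWriter(f)) {
      pw.println("10 20 30");
    }
    int[] out = new int[3];
    int n = 0;
    // Scanner pulls one token at a time from the file.
    try (Scanner sc = new Scanner(f)) {
      while (sc.hasNextInt()) {
        out[n++] = sc.nextInt();
      }
    }
    System.out.println(out[0] + " " + out[1] + " " + out[2]);
    f.delete();
  }
}
```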

  • My array is now up to 13344909. I am profiling the structure of prime numbers and adding that info to a large database as a strategy for prime factorization. I kind of have a rough theory about the general structure of prime numbers, and this profiling could also help to see other patterns. I am using the BigInteger library's .isProbablePrime(8), and it takes a really long time: about a million numbers in one hour.

    Yeah, the problem is the string array never gets destroyed, and yeah, I can pack 20 ternary values into 1 int, but I am not sure if bit-shifting operations are expensive or not; I know int conversion of a byte is stupid slow. I also wonder if having a BitSet as a global array would help with speed, because maybe people are creating tons of BitSets and that is causing the program to stall for garbage collection.

    When I try to change the memory to 512 MB on a 64-bit machine with 4 GB of memory, it runs but stalls a lot, and when I press the stop button it says "lost packets 55" or something like this. I am wondering if others are having this problem or if it's just the security settings of the machine causing it. I am using P3.3.6, and it happened on both a 32-bit machine and a 64-bit machine. I checked the Processing issues and nobody has submitted a bug, so for now I will just assume it's the machine.
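On the speed point: if the goal is to classify every number in a dense range as prime or composite, a classic Sieve of Eratosthenes over a single java.util.BitSet is vastly faster than calling isProbablePrime on each number individually. A generic sketch (not the poster's database code):

```java
import java.util.BitSet;

public class Sieve {
  // Sieve of Eratosthenes over [0, n]; bit i set means i is composite.
  static BitSet sieve(int n) {
    BitSet composite = new BitSet(n + 1);
    composite.set(0);
    composite.set(1);
    for (int p = 2; (long) p * p <= n; p++) {
      if (!composite.get(p)) {
        // Mark all multiples of the prime p, starting at p*p.
        for (int m = p * p; m <= n; m += p) composite.set(m);
      }
    }
    return composite;
  }

  public static void main(String[] args) {
    BitSet c = sieve(100);
    int count = 0;
    for (int i = 2; i <= 100; i++) if (!c.get(i)) count++;
    System.out.println(count);  // number of primes up to 100
  }
}
```

A sieve up to 13 million fits in under 2 MB of BitSet and finishes in well under a second, versus hours of per-number primality tests.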
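And on the bit-shifting worry: shifts and masks are among the cheapest CPU operations, far cheaper than the memory they save. A sketch (my own helper names) of packing 16 ternary values into one int at 2 bits each, which is simpler and faster than true base-3 packing of 20 values per int:

```java
public class PackedTernary {
  // 16 two-bit slots per int; slot values are 0..2 (ternary).
  static int set(int packed, int slot, int value) {
    int shift = slot * 2;
    // Clear the slot's two bits, then OR in the new value.
    return (packed & ~(3 << shift)) | ((value & 3) << shift);
  }

  static int get(int packed, int slot) {
    return (packed >>> (slot * 2)) & 3;
  }

  public static void main(String[] args) {
    int word = 0;
    word = set(word, 0, 2);   // TRUE
    word = set(word, 7, 1);   // FALSE
    // Slot 1 was never set, so it reads back as 0 (UNEXPLORED).
    System.out.println(get(word, 0) + " " + get(word, 7) + " " + get(word, 1));
  }
}
```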

  • Yeah, I am doing the reverse actually: splitting the semiprime into its prime factors. By the way, there is a bug with P3.3.6 and the memory expansion, because I can expand the memory to 700 MB with no problem when using P3.3.5. I will post this right now.

  • How did this project work out? Did you get it working, or did you end up switching languages?

    I'm curious because profiling for prime factorization felt like a bad fit for Java / Processing; it felt more like a C thing. But that's just a feeling, I don't really know.
