Hi, first time posting here and I have what I hope is a simple question.
I'm new to Processing, and just as new to Java, but have a little bit of programming experience. Most recently I've dabbled in Python which isn't really much of a help here.
I am working with data (xyz values) that I export as a csv file from a piece of test machinery we have here at work. The data comes out of it really sloppy, so I wrote one little sketch that cleans it all up into a format that's easy to work with. I wrote another sketch that reads the tabular data and plots it in 3D, which I can then rotate around and manipulate to analyze visually. It works really neat.
Right now I am working with several csv files. My current example has 11, and each file can have tens of thousands of rows of coordinates; one even had a touch over 200k lines. My code currently has two nested for loops: one that reads a file from a "filenames array" (code I got off processing.org), and another that loops through that file, reading each row and plotting it. When it's done, it moves on to the next file.
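Roughly the shape of what I have now, if it helps. This is plain Java with made-up names and hard-coded strings standing in for the real csv files, not my actual sketch:

```java
public class NestedLoopSketch {
    // Parse one csv row like "1.0,2.0,3.0" into {x, y, z}.
    static float[] parseRow(String line) {
        String[] parts = line.split(",");
        return new float[] {
            Float.parseFloat(parts[0].trim()),
            Float.parseFloat(parts[1].trim()),
            Float.parseFloat(parts[2].trim())
        };
    }

    public static void main(String[] args) {
        // Stand-ins for the real files; each inner array is one file's rows.
        String[][] files = {
            {"0,0,0", "1,1,1"},
            {"2,2,2"}
        };
        int plotted = 0;
        for (String[] rows : files) {          // outer loop: one pass per file
            for (String line : rows) {         // inner loop: one pass per row
                float[] xyz = parseRow(line);  // in the real sketch, plot xyz here
                plotted++;
            }
        }
        System.out.println(plotted);           // 3 rows plotted in total
    }
}
```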
The problem is that it's slow and clunky. My computer lags and rotating the trace is choppy at best. I have a few possible solutions that I think might speed it up and make the sketch run more efficiently. One would be to see if I can store all of the tables in an array. That way all the values would be in RAM and it wouldn't be sending a command to read a file off of my hdd every time (and I have an ssd; I don't know how much slower it would run if the files were on a network drive). Another solution would be to append all of the files into one super long table, but this would cause other problems that I would very much like to avoid; an array of tables would be much more elegant. A third solution, which I will work on in conjunction with the other two, is to figure out a way to shorten some of the longer files. I don't need 200k lines of data to plot an accurate trace, so I'll need to figure out some sort of algorithm that omits excessive, non-essential data.
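For the array-of-tables idea, this is roughly what I'm picturing: parse every file once up front (in setup(), in Processing terms), keep everything in one 3-deep array, and then drawing only ever touches RAM. Plain Java again, with in-memory strings standing in for the real files, and parseCsv a made-up name:

```java
public class TableCache {
    // Parse a whole csv (given as text) into rows of {x, y, z}.
    static float[][] parseCsv(String csvText) {
        String[] lines = csvText.split("\n");
        float[][] rows = new float[lines.length][];
        for (int i = 0; i < lines.length; i++) {
            String[] p = lines[i].split(",");
            rows[i] = new float[] {
                Float.parseFloat(p[0].trim()),
                Float.parseFloat(p[1].trim()),
                Float.parseFloat(p[2].trim())
            };
        }
        return rows;
    }

    public static void main(String[] args) {
        // Stand-ins for reading each file off disk once.
        String[] fileContents = { "0,0,0\n1,2,3", "4,5,6" };
        // tables[file][row] -> {x, y, z}; everything lives in RAM after this loop.
        float[][][] tables = new float[fileContents.length][][];
        for (int f = 0; f < fileContents.length; f++) {
            tables[f] = parseCsv(fileContents[f]);
        }
        System.out.println(tables[0][1][2]);  // z of the second row of the first table: 3.0
    }
}
```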
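For shortening the long files, even something as dumb as keeping every nth row would get a 200k-line trace down to a few thousand points. A sketch of that (decimate and the stride value are just my own names; a smarter algorithm could skip points based on distance instead):

```java
public class Decimate {
    // Keep every nth row of a table; a crude way to thin an oversampled trace.
    static float[][] decimate(float[][] rows, int stride) {
        float[][] out = new float[(rows.length + stride - 1) / stride][];
        for (int i = 0; i < out.length; i++) {
            out[i] = rows[i * stride];
        }
        return out;
    }

    public static void main(String[] args) {
        float[][] rows = new float[10][];
        for (int i = 0; i < 10; i++) rows[i] = new float[] { i, i, i };
        float[][] thin = decimate(rows, 4);   // keeps rows 0, 4, 8
        System.out.println(thin.length);      // 3
    }
}
```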
In Python I recall that you could create lists, and lists of lists, and lists of lists of lists, nested deeper than I could even attempt to keep straight in my head. So in this case I would make one list that represented a row (x, y, z), then I'd make a list called "table" that had all the rows in it, then I'd make another list called "tables" that had all the tables in it. I just don't know how to do that in Processing. Accessing the data that way would not be as simple as it is in Processing, though.