The answer might really depend on what you intend to do with these "slices".
Perhaps it is better to process each pixel as you read it, rather than grab them all at once and process them later.
Also, do you need to grab every slice, or only a couple of slices? If only a few slices, you should not be too concerned about the speed - it should be pretty fast. I expect there is significantly more to be gained in whatever processing you then apply to the slices.
Last night I happened to be reading about the Buffer classes (whilst inspecting Memo's MSAFluid demo with particles), and Buffers have the potential to give you a speed advantage.
I've never used them before, but the idea is that native I/O operations can be performed with them, and they support bulk transfers. The impression I get is that you can create a ByteBuffer as your base storage, which can be "direct" (allowing native I/O operations), then take an IntBuffer view of the ByteBuffer, which you could then use to read in your row of pixels[] integers (there isn't going to be an equivalent operation for grabbing columns, unfortunately!).
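As a rough, untested sketch of that idea (the sketch size, row index and column index here are arbitrary, and this is not the demo code linked below):

import java.nio.ByteBuffer;
import java.nio.IntBuffer;

void setup() {
  size(640, 480);
  loadPixels();  // makes the sketch's pixels[] array available

  // Direct ByteBuffer as the base storage: 4 bytes per ARGB pixel, one row wide.
  ByteBuffer rowBytes = ByteBuffer.allocateDirect(width * 4);
  // IntBuffer view of the same memory, so a row of pixels[] can be bulk-copied.
  IntBuffer rowInts = rowBytes.asIntBuffer();

  // Grab one horizontal slice (row 100, purely for illustration) in a single bulk put.
  int row = 100;
  rowInts.put(pixels, row * width, width);

  // A vertical slice has no equivalent bulk operation, so it needs a loop.
  IntBuffer colInts = ByteBuffer.allocateDirect(height * 4).asIntBuffer();
  int col = 50;
  for (int y = 0; y < height; y++) {
    colInts.put(pixels[y * width + col]);
  }
}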
What you do with your buffer (of bytes / ints) after that is going to be the critical part!
From the ByteBuffer docs:
Quote: Direct vs. non-direct buffers
A byte buffer is either direct or non-direct. Given a direct byte buffer, the Java virtual machine will make a best effort to perform native I/O operations directly upon it. That is, it will attempt to avoid copying the buffer's content to (or from) an intermediate buffer before (or after) each invocation of one of the underlying operating system's native I/O operations.
A direct byte buffer may be created by invoking the allocateDirect factory method of this class. The buffers returned by this method typically have somewhat higher allocation and deallocation costs than non-direct buffers. The contents of direct buffers may reside outside of the normal garbage-collected heap, and so their impact upon the memory footprint of an application might not be obvious. It is therefore recommended that direct buffers be allocated primarily for large, long-lived buffers that are subject to the underlying system's native I/O operations. In general it is best to allocate direct buffers only when they yield a measurable gain in program performance.
A direct byte buffer may also be created by mapping a region of a file directly into memory. An implementation of the Java platform may optionally support the creation of direct byte buffers from native code via JNI. If an instance of one of these kinds of buffers refers to an inaccessible region of memory then an attempt to access that region will not change the buffer's content and will cause an unspecified exception to be thrown either at the time of the access or at some later time.
Whether a byte buffer is direct or non-direct may be determined by invoking its isDirect method. This method is provided so that explicit buffer management can be done in performance-critical code.
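In code, that distinction is just a choice of factory method, and you can confirm it with isDirect() (buffer sizes here are arbitrary):

import java.nio.ByteBuffer;

void setup() {
  ByteBuffer direct = ByteBuffer.allocateDirect(1024);  // candidate for native I/O, may live outside the GC heap
  ByteBuffer heap = ByteBuffer.allocate(1024);          // backed by an ordinary byte[] on the heap
  println(direct.isDirect());  // true
  println(heap.isDirect());    // false
}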
*extended pause whilst I go tinker with something...*
Okay, I posted some demo code on OpenProcessing. It doesn't (yet) have a comparison to Arrays.copyOfRange() or System.arraycopy(), but it at least shows how to get data into the buffers (if not how to exploit the I/O benefits of the buffers!).
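For reference, the plain-array versions of a row slice would look something like this (not part of the demo, and untimed; the row index is arbitrary):

void setup() {
  size(640, 480);
  loadPixels();
  int row = 100;

  // Arrays.copyOfRange allocates a fresh destination array on every call.
  int[] sliceA = java.util.Arrays.copyOfRange(pixels, row * width, (row + 1) * width);

  // System.arraycopy can reuse a pre-allocated destination, which helps if you slice repeatedly.
  int[] sliceB = new int[width];
  System.arraycopy(pixels, row * width, sliceB, 0, width);
}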
Demo direct buffer code:
spxlSliceOfPixels-spxl