Yeah, adjusting the image's saturation and re-saving would be worse than making localized edits, because as guss mentioned, the format compresses the image data in small square blocks. I don't believe those blocks can change size, though; my understanding is they're always 8 by 8 pixels.
I know all of this is covered in tons of detail on Wikipedia (http://en.wikipedia.org/wiki/JPEG#Encoding), but to shed a little more light on the process and refresh my own memory, here's a simplified explanation:
The image is first transformed from RGB into the YCbCr color space: a luma (brightness) channel, Y, plus two chroma channels, Cb and Cr. The chroma channels are then usually downsampled, which effectively spends more bits on the brightness component, since your eyes resolve brightness detail far better than color detail. After that, each channel is divided up into those 8 by 8 blocks, so you get nice localized arrays of 64 samples per block.
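Just to make that concrete, here's a rough Python sketch of the color conversion and a 2x2 chroma downsample (the constants are the standard full-range BT.601 ones used by JFIF; the function names are just my own):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # Full-range BT.601 conversion, as used by JFIF.
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def subsample_420(chroma):
    # 4:2:0 subsampling: average each 2x2 block of a chroma plane,
    # halving its resolution in both directions.
    h, w = chroma.shape
    c = chroma[:h - h % 2, :w - w % 2]
    return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```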
These arrays are then fed through a discrete cosine transform (a 2D DCT, run once per block per channel), and you end up with 64 coefficients describing the cosine waves that make up that 64-pixel block. This is the heavy mathematics portion -- it can be the most difficult part to understand, but there are a ton of examples explaining how DCTs work online, and even code samples showing how they're written. The most advantageous thing about the DCT is that the coefficients are organized from low to high frequency. This is nice because it makes it easy to keep the frequencies you think matter most for your compression; in this case it's the lower-frequency information, since that's what your eyes mostly perceive at a distance.
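If you'd rather not hand-roll the math, the 2D DCT of a block is just a 1D DCT applied along each axis, so something like this works as a sketch (using SciPy; note the spec level-shifts samples by 128 first so they're centered on zero):

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    # Type-II 2D DCT with orthonormal scaling, applied one axis at a time.
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

block = np.random.randint(0, 256, (8, 8)).astype(np.float64) - 128.0
coeffs = dct2(block)
# coeffs[0, 0] is the DC term (the block's average level); indices further
# from the top-left corner correspond to higher spatial frequencies.
```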
Next, the coefficients are quantized -- the big lossy step of the process, which acts roughly like a low-pass filter. Each coefficient gets divided by the corresponding entry in an 8 by 8 quantization table (the high-frequency entries get the biggest divisors) and rounded to the nearest integer. This sends most of the high frequency information to 0, which you can throw away, and turns the low frequency information into small integers that are easy to compress.
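For example, with the example luminance table from Annex K of the JPEG spec (real encoders scale this table up or down to trade quality against file size):

```python
import numpy as np

# Example luminance quantization table from Annex K of ITU-T T.81.
# Note it's a whole 8x8 table, not a single scalar: the high-frequency
# entries (toward the bottom right) get the largest divisors.
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(coeffs, table=Q_LUMA):
    # Element-wise divide and round: the one irreversible step in the pipeline.
    return np.round(coeffs / table).astype(np.int32)
```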
The last step is a combination of run-length encoding and Huffman coding to compress those integer values. The quantized coefficients are read out of each block in a zigzag order, which walks from low to high frequency so the trailing zeros cluster into long runs for the RLE. RLE and Huffman are both lossless compression techniques, so nothing further gets thrown away here.
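Here's a simplified sketch of the zigzag scan and the run-length pass over one quantized block. Real JPEG is a bit more involved -- it Huffman-codes (run, size) symbols with the coefficient bits appended, and codes each DC term as a difference from the previous block's -- so treat this as illustrative:

```python
import numpy as np

def zigzag_indices(n=8):
    # Walk the block diagonal by diagonal, alternating direction, so the
    # coefficients come out ordered roughly from low to high frequency.
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def rle_ac(coeffs):
    # Run-length encode the 63 AC coefficients as (zero_run, value) pairs,
    # replacing the trailing zeros with a single end-of-block marker.
    zz = [int(coeffs[i, j]) for i, j in zigzag_indices()]
    last = max((k for k, v in enumerate(zz) if v != 0), default=0)
    out, run = [], 0
    for v in zz[1:last + 1]:  # zz[0] is the DC term, coded separately
        if v == 0:
            run += 1
        else:
            out.append((run, v))
            run = 0
    out.append('EOB')
    return out
```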
It's really not all that difficult to write the algorithm as long as you use test-driven development and build it one step at a time. And if one was going to attempt it, it's probably worth getting RLE and Huffman working first, as those are a considerable amount of the work on their own. =)
Wow, so yeah, I'm like totally off topic, but maybe I helped a couple of people out with their coursework.
Jack