I have a question about video decoding in Processing. I found a lot of information on the processing.org website, but nothing that helps with my problem: I bought a Z Camera E1, which has a streaming protocol over TCP/IP, but this protocol is not standard, so I can't use the usual streaming libraries.
The streaming protocol is simple:
- First you send 1 byte to the camera: 0x01.
- The camera then answers with 4 bytes for the size of the data, followed by that many bytes of data.
Each block of data sent is one frame encoded in h264. (Then I have to send another 0x01 to get another frame.)
This information is on the GitHub page, at the bottom: https://github.com/imaginevision/Z-Camera-Doc/blob/master/http.md#streaming
The first step works fine for me in Processing: I send 0x01, the camera sends me 4 bytes for the size of the data, then that many bytes of data. The size I read matches the number of bytes I receive next. Up to this point, it works.
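For reference, here is a minimal Java sketch of the framing step described above (reading a 4-byte size prefix, then that many bytes). It simulates the camera's answer with an in-memory stream instead of a real socket, and it assumes the 4-byte size is big-endian; check that assumption against the values your camera actually sends.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FrameReader {

    // Read one length-prefixed frame: 4-byte size, then that many bytes of h264 data.
    // NOTE: big-endian byte order for the size is an assumption here.
    static byte[] readFrame(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int size = din.readInt();     // 4-byte big-endian length
        byte[] frame = new byte[size];
        din.readFully(frame);         // blocks until the whole frame has arrived
        return frame;
    }

    public static void main(String[] args) throws IOException {
        // Simulate the camera's answer: size = 3, then 3 payload bytes.
        byte[] simulated = {0, 0, 0, 3, 0x41, 0x42, 0x43};
        byte[] frame = readFrame(new ByteArrayInputStream(simulated));
        System.out.println("frame length: " + frame.length); // prints "frame length: 3"
    }
}
```

With a real connection you would write 0x01 to the socket's OutputStream before each readFrame call, exactly as the protocol describes.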
But now I have an array of bytes which contains one h264 frame. I don't know how to convert/decode it to show the image on screen (or to modify it beforehand).
Are there any functions or libraries in Processing to do this?
Thanks for your help,
Have a nice day, Sebastien
(I hope my English is not too bad ... :-) )