I've been participating in the Beta BTLive tests, so let me summarise my main observations.
The protocol seems good enough for streaming video at present and foreseeable consumer bandwidths. So I don't think it would be worth spending more time improving it, unless you find some really worthwhile integration with something like Adobe Flash Media Live Encoder.
Still, if you think you need to wait for better bandwidths, I think you should focus on improving the app's settings and interface to give as much power as possible to the independent broadcaster.
I can think of:
1) An interface like CamTwist, but one that actually lets you join different video sources - with different live effects and settings applied to each - into a mosaic-like whole.
So, say you have three selection tools: a normal selection tool that lets you select a rectangular area; a free selection tool for a free-form area; and a polygon tool (you say how many points you want your polygon to have, then click that many times to place each point). The small problem with joining these different areas into a single screencast is, of course, how they would all fit next to each other. So maybe only rectangular selections make sense, because they are much easier to fit side by side.
But there's more to it. After selecting an area from a source, you should be able to position it on the output side and decide how its opacity level (and other settings...) would combine with other areas where they intersect; a rough sketch of this follows.
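A minimal sketch in Python/NumPy: rectangular source regions pasted into one output frame, each with its own opacity. The `Layer` structure and `composite` function are my own illustration, not anything that exists in BTLive or CamTwist:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Layer:
    frame: np.ndarray   # H x W x 3 source pixels (uint8)
    x: int              # destination top-left corner in the output
    y: int
    opacity: float      # 0.0 = invisible, 1.0 = fully opaque

def composite(layers, out_w, out_h):
    """Paint layers bottom-to-top; where they overlap, blend by opacity."""
    out = np.zeros((out_h, out_w, 3), dtype=np.float32)
    for layer in layers:
        h, w = layer.frame.shape[:2]
        x0, y0 = layer.x, layer.y
        x1, y1 = min(x0 + w, out_w), min(y0 + h, out_h)
        src = layer.frame[: y1 - y0, : x1 - x0].astype(np.float32)
        out[y0:y1, x0:x1] = (1 - layer.opacity) * out[y0:y1, x0:x1] + layer.opacity * src
    return out.astype(np.uint8)

# Two sources in a mosaic; the second is semi-transparent where they overlap.
cam = np.full((120, 160, 3), 200, dtype=np.uint8)
web = np.full((120, 160, 3), 60, dtype=np.uint8)
frame = composite([Layer(cam, 0, 0, 1.0), Layer(web, 100, 0, 0.5)], 320, 120)
```

Painting bottom-to-top keeps the intersection rule simple: wherever two layers overlap, the upper one contributes in proportion to its opacity.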
Let me give an example that should make it much clearer:
I want my output video to display last week's national lottery numbers in the top-left corner, and I know a website where I can capture them for my video. Because they only change once a week, I can set this first "source port" to a still frame of a few seconds, captured once a week.
I can see things getting more complicated, because most websites are updated all the time, so it would not be very reliable to define my "source port" as just the full URL path to the area as it would appear in a specific browser at a specific resolution. But you get the idea: manipulating video captured from a specific part of a page. What I'm reaching for is the idea of anchoring to a specific element of a website, as in the sketch below.
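Here is what such a weekly "source port" could look like, sketched in Python. The `capture_element()` function is a hypothetical stand-in for whatever would actually render the page and crop to the anchored element (a headless browser, say); the URL and selector are made up:

```python
import time
from dataclasses import dataclass

WEEK = 7 * 24 * 3600  # seconds

def capture_element(url, selector):
    """Hypothetical: render `url` and return the pixels cropped to `selector`."""
    return b"\x00"  # placeholder image bytes

@dataclass
class SourcePort:
    url: str
    selector: str          # the page element the port is anchored to
    refresh_every: float   # seconds between captures
    cached: bytes = b""
    captured_at: float = 0.0

    def frame(self):
        """Return the cached capture, re-capturing only when it is stale."""
        if not self.cached or time.time() - self.captured_at > self.refresh_every:
            self.cached = capture_element(self.url, self.selector)
            self.captured_at = time.time()
        return self.cached

# A once-a-week port anchored to a (made-up) lottery-results element.
lottery = SourcePort("https://example.org/lottery", "#last-draw", WEEK)
img = lottery.frame()
```

Anchoring by element rather than by URL-plus-screen-position is what would make the capture survive page redesigns around it.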
This is getting really confusing, so let's move on. But before that, let me tell you a bit more.
I wish word processors simply had layers, the way image editors do. I don't know why nobody has brought this forward yet. Just think about it for a while: as many layers as you wanted, each with its own settings (opacity level and others). You would also be able to import a PDF into a word-processing layer. And not just that: your word processor could have a very sophisticated OCR (Optical Character Recognition) engine, so that when importing a PDF into a specific layer you could choose to import only the parts of its text that the OCR engine filters by a specific font, size, colour, and so on. A toy version of that filter is sketched below.
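I assume here that the OCR engine has already produced a list of recognised spans annotated with font, size and colour (real engines differ in what they report, so that annotated format is an assumption of mine); importing into a layer is then just a filter:

```python
from dataclasses import dataclass

@dataclass
class Span:
    text: str
    font: str
    size: float
    colour: str

def import_filtered(spans, font=None, min_size=None, colour=None):
    """Keep only the spans matching the requested font/size/colour."""
    keep = [s for s in spans
            if (font is None or s.font == font)
            and (min_size is None or s.size >= min_size)
            and (colour is None or s.colour == colour)]
    return " ".join(s.text for s in keep)

spans = [Span("Chapter", "Garamond", 18, "black"),
         Span("1", "Garamond", 18, "black"),
         Span("footnote", "Garamond", 8, "grey")]
layer_text = import_filtered(spans, font="Garamond", min_size=12)  # "Chapter 1"
```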
With operating systems I can imagine the same thing: layers, and disjoint free-form (or otherwise selected) areas interacting with each other according to your specific intersection settings. And because in an operating system these could correspond to different apps/processes, running in the foreground or in the background, you would have to rethink it all a bit.
Again an example, though it is getting harder: you want to create a "port-event" (let's call it that), i.e. an event that fires in your port when a specific RGB colour covers at least 100 pixels across a set of disjoint areas that you selected from different places and joined into the same port. Of course these "port-events" could be triggered by very different things than having more than 100 pixels of a specific kind; they could be all sorts of multimedia conditions. But it would make it easy to cut holes in your desktop that would be inactive for your own app but active for other apps, which could, for instance, only be accessible from other computers.
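A minimal sketch of that trigger in Python/NumPy, under the assumptions of my example (rectangular regions, 100 pixels of one colour):

```python
import numpy as np

def count_colour(frame, regions, target, tolerance=10):
    """Sum matching pixels over (x, y, w, h) regions of an H x W x 3 frame."""
    total = 0
    for x, y, w, h in regions:
        patch = frame[y:y + h, x:x + w].astype(np.int16)
        close = np.all(np.abs(patch - target) <= tolerance, axis=-1)
        total += int(close.sum())
    return total

def port_event(frame, regions, target=(255, 0, 0), threshold=100, on_fire=print):
    """Fire the callback when the joined regions hold enough matching pixels."""
    if count_colour(frame, regions, np.array(target)) >= threshold:
        on_fire("port-event fired")

frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[10:30, 10:30] = (255, 0, 0)                       # 400 red pixels
port_event(frame, [(0, 0, 50, 50), (200, 0, 50, 50)])   # prints: port-event fired
```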
2) Giving more power to the broadcaster with very simple tools built into your interface app. Like scrolling a text from left to right at a specific position and speed, given a string of text with command tags that make it really easy to automate such strings; a toy example of that follows.
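Here is what such a tag syntax might look like, sketched in Python. The `[key=value]` grammar is my invention, purely to show how command tags could automate the string:

```python
import re

def parse_ticker(tagged):
    """Split '[key=value]...text' into settings and the text to scroll."""
    settings = {"y": 0.0, "speed": 60.0}        # defaults: pixels, px/sec
    for key, value in re.findall(r"\[(\w+)=([^\]]+)\]", tagged):
        settings[key] = float(value)
    text = re.sub(r"\[\w+=[^\]]+\]", "", tagged)
    return settings, text

def ticker_position(settings, t, screen_w=320):
    """Left-to-right scroll: x at time t seconds, wrapping at screen width."""
    return (settings["speed"] * t) % screen_w, settings["y"]

settings, text = parse_ticker("[y=20][speed=40]Lottery results tonight")
x, y = ticker_position(settings, t=3.0)  # x = 120.0, y = 20.0
```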
3) Giving more power to the independent broadcaster by enabling you to use Proce55ing (Processing) code to drive your video ports, and making the interface with Processing much friendlier with some kind of virtual "Sinclair ZX Spectrum" keyboard with code-word keys; a small illustration follows.
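As a toy illustration of the keyword-keyboard idea, in Python: one keypress inserts a whole Processing snippet, the way each Sinclair ZX Spectrum key carried a BASIC command. The key-to-snippet map is purely illustrative:

```python
# Illustrative bindings from single keys to Processing code fragments.
KEYWORD_KEYS = {
    "s": "size(640, 480);",
    "b": "background(0);",
    "d": "void draw() {\n}",
    "e": "ellipse(x, y, w, h);",
}

def press(buffer, key):
    """Append the snippet bound to `key`, or the key itself if unbound."""
    return buffer + KEYWORD_KEYS.get(key, key) + "\n"

sketch = ""
for key in "sbd":
    sketch = press(sketch, key)
print(sketch)   # three lines of Processing code from three keypresses
```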
4) However problematic that may be, it seems we are moving towards a total surveillance society. We can think of products, like Apple products, that would be very difficult to hack and would make it easy to upload or live-stream recordings of reality, well tagged in terms of space and time. People would have enough freedom to carry them, and birds would carry them too.
If you can have a kinetic-energy wristwatch that uses no battery and keeps telling the time as long as you keep moving, you can also have "birdzooms" that gather energy as they move and spend enough of it for others to process their footage into integrated, levelled media content. Many AI algorithms could be developed to make this media content seamless and integrated, so that you could access it on demand.
Remember, in the future surveillance cameras may all give much more importance to whatever moves. Why would you want a surveillance camera to keep spending resources on static data/info? So they would pan around, zooming in and out to maximise newly added information rather than static information; a crude sketch of that behaviour follows.
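A crude sketch of that motion-first behaviour in Python/NumPy: score each tile of the frame by how much it changed since the previous frame, and aim the (virtual) zoom at the busiest tile. The tile size and the simple frame differencing are my assumptions; real systems would be far more elaborate:

```python
import numpy as np

def busiest_tile(prev, curr, tile=40):
    """Return the (x, y) of the tile with the largest frame-to-frame change."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).sum(axis=-1)
    h, w = diff.shape
    best, best_xy = -1, (0, 0)
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            score = diff[y:y + tile, x:x + tile].sum()
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy   # aim the zoom here; static tiles score ~0

prev = np.zeros((240, 320, 3), dtype=np.uint8)
curr = prev.copy()
curr[80:120, 200:240] = 255          # something moved here
print(busiest_tile(prev, curr))      # -> (200, 80)
```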
I know it will be difficult to make sense of all this, but I hope you find it at least a bit useful.
I am sending this to the BitTorrent developers, to London Hackspace, and also to Demis Hassabis [demis3@gmail.com] (one of the authors of the "Republic: The Revolution" computer game) and to [dan.osullivan@nyu.edu] (one of the authors of the "Physical Computing" book). I may also try to post this to the LabVIEW and Proce55ing communities.
I would also like to add some of my ideas for a new world order and economy, but that is a different story, so let's leave it at that.
I know some of these things will be really difficult to make sense of, but please bear in mind how hard it is to convey the underlying ideas; hopefully some people will find this post useful and help make parts of it clearer.