GSoC 17 application - Processing for Android and Helping with the PDE

Hello! I am Sara, a Software Engineering student from Rome, Italy. Processing has made a huge contribution to my growth as a student, and I might say I became really passionate about coding when I discovered Processing, so it holds a special place in my heart. I have used it to develop many games, data visualization applications, installations... you can find some of them on my website, if you want to take a look: click here. I have never actually contributed to the project, but I'd love to do so. I've seen the list of projects, and I'd love to try this one:

Processing for Android: We are incorporating support for live wallpapers, watch faces, and VR (Cardboard/Daydream) in the latest versions of the Android mode. It would be great to see original applications of these new app types on the Android platform.

I also have experience with developing apps for Cardboard, so I'd love to give it a try. Maybe this could also fall under Help with the PDE, since these kinds of applications might also be used as new examples (:

So, should I directly suggest a project proposal here?

Thank you and have a nice day (: Sara

Comments

  • Hi Sara,

    Many thanks for reaching out, and for sharing your work!

    It would be amazing to have a proposal focusing on VR with the Android mode; there are many exciting directions to explore. A VR game easily comes to mind (and would be awesome), but it could also be something else (an audio-visual experience, a collaborative VR tool, etc.), or a combination of different things to realize some cool concept... Here is a recent example of an Android Experiment developed using the live wallpaper functionality in the mode.

    A more ambitious project such as this will likely reveal bugs and limitations in the Android mode, which would need to be addressed during GSoC in order to complete the project successfully (this is what happened during the development of Look Up, the Android Experiment I just mentioned).

    Feel free to post more concrete ideas here for further discussion, or to start working on a draft of your proposal. Here are some guidelines about the proposal writing process; note that the application period opens on the 20th. The recommendation is to submit early, so we can give you feedback and you can refine the proposal up to the deadline (April 3rd).

    Andres

  • Hi Sara and welcome! Thanks for sharing your website! @codeanticode is right, you are welcome to post about ideas in whatever form you like here for feedback.

  • Hello again (:

    I took some time to think about what I could do, and I came up with some ideas.

    I would like to ask whether you'd prefer a project that is simple enough to be used as an example for the documentation, or a more complex/complete project to show what the platform is capable of.

    For the first scenario, I was thinking about a series of live wallpapers aimed at showing how different sensors could be used. Consider, for example, an equalizer on the wallpaper driven by ambient sound from the phone's microphone, or a landscape that changes its appearance based on temperature and/or light.
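
    A tiny sketch of what I have in mind for the light-driven case (this assumes the Ketai library's light-sensor callback, and the brightness mapping is only an example):

        import ketai.sensors.*;

        KetaiSensor sensor;
        float lux = 0;

        void setup() {
          fullScreen(P2D);
          sensor = new KetaiSensor(this);
          sensor.start();
        }

        void draw() {
          // darker wallpaper at night, brighter in daylight
          background(map(constrain(lux, 0, 1000), 0, 1000, 10, 255));
        }

        void onLightEvent(float v) {
          lux = v;
        }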

    In the second case I have some ideas I would like to share with you.

    • I love the idea of making a game/tool/visual experience with VR! However, I don't really see a way of interacting with it. Neither the magnetic button that many viewers have, nor gaze controls (meaning that, if you have a button in your VR world, you can push it by looking at it for a few seconds) are currently supported. I have some experience with raytracing, so maybe I could make an example showing how gaze controls can be implemented.
    • Some time ago I made a Cardboard game that I exhibited during an event. The player was a bird flying through an archipelago of islands, mimicking the movement of the wings with their own arms. The movements were sent to the phone by two Nintendo Wii controllers that the player held in their hands. Each game lasted about 5 minutes, in which the players had to fly around collecting small seeds hidden in the landscape. By bringing them to a central crater they could make a (procedurally generated) 'tree of life' grow taller. At the end of the event, we showed everyone the tree grown by the efforts of all the players, thus creating a little collaborative VR experience. This game was made solely to be shown at that event and was never distributed, so I could perhaps use the idea as a starting point for a VR game/collaborative experience.
    • Also, as a VR experience, I'd like to create an application through which users could explore shape-shifting, mesmerizing geometries. That is what Processing is really fantastic at, and we should absolutely take advantage of it by showing how cool that would be in VR. Imagine this one: click here, or this one: click here, in VR! (There is a tiny sketch of the kind of geometry I mean right after this list.)
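
    Just to give a flavour of the kind of geometry I mean, here is a tiny desktop (non-VR) sketch I'd start from: a noise-deformed sphere of points that keeps morphing. In the VR renderer the camera would simply sit inside it.

        float t = 0;

        void setup() {
          size(800, 800, P3D);
          stroke(255);
        }

        void draw() {
          background(0);
          translate(width/2, height/2);
          rotateY(t * 0.2);
          for (float lat = -HALF_PI; lat < HALF_PI; lat += 0.1) {
            for (float lon = -PI; lon < PI; lon += 0.1) {
              // the radius wobbles with 3D noise, so the shape keeps shifting
              float r = 200 + 80 * noise(lat + 10, lon + 10, t);
              point(r * cos(lat) * cos(lon), r * sin(lat), r * cos(lat) * sin(lon));
            }
          }
          t += 0.01;
        }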

    Please let me know what you think about these ideas; I'd really like your opinion on which one would be more appropriate for the project (: I'll be working on a prototype over the next few days.

  • Hi, I'm leaning towards the VR ideas, especially the last one of creating a viewer of "impossible" geometries or landscapes that you can explore through VR. It would be very neat if users could contribute their own landscapes in some way. But I'm only talking off the top of my head, feel free to suggest other possibilities!

    It would be very neat if users could contribute their own landscapes in some way

    You mean by adding a way to create particular geometries directly inside the app, by letting users load their 3D Processing sketches, or maybe by loading .obj models?

    We could keep a library of user-created content accessible from the app, so that any user can browse it (:
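
    For .obj models at least, the loading side is already trivial in Processing (the file name below is just a placeholder), so the interesting part of the project would be the sharing and curation infrastructure around it:

        PShape island;

        void setup() {
          size(800, 800, P3D);
          // "island.obj" stands in for whatever model a user contributes
          island = loadShape("island.obj");
        }

        void draw() {
          background(0);
          lights();
          translate(width/2, height/2);
          shape(island);
        }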

  • @picorana

    I don't really see a way of interacting with it. Neither the magnetic button that many viewers have

    Are you familiar with the Google Cardboard model that uses a set of magnets on the side of the viewer? https://vr.google.com/cardboard/ Implementing VR interaction with the magnetic sensor available in most phones should be straightforward, and I'd encourage it. The concept is quite simple and can be done in Processing's Android mode. Having a physical way to generate a trigger will be paramount to enhancing the user experience. I don't have experience with ray tracing but it is an interesting concept to explore.

    Thinking about the bigger picture, maybe you could generate a VR demo experience with the option for the user to hook up their own physical trigger. Arduinos will come in handy for these cases, or you could use a paired cell phone via Bluetooth or WiFi Direct, or even the OSC library available in Processing. One could also use Leap Motion or Myo armband technologies. You don't have to implement all these technologies, just create an API where users can easily connect their favorite trigger. In a recent VR experience, for example, the viewer came with a handy remote control, like a Wii controller. I believe it was the Sony VR viewer... I have to check. The remote enhanced the experience by allowing the user to actively interact with the scene.
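
    For the OSC route, a sketch as small as this one (using the oscP5 library; the /trigger address and port 12000 are arbitrary choices for the example) would already let any external device act as the trigger:

        import oscP5.*;

        OscP5 osc;
        boolean triggered = false;

        void setup() {
          size(400, 400);
          // listen for incoming OSC messages on port 12000
          osc = new OscP5(this, 12000);
        }

        void draw() {
          background(triggered ? color(0, 255, 0) : color(40));
          triggered = false;
        }

        void oscEvent(OscMessage msg) {
          if (msg.checkAddrPattern("/trigger")) {
            triggered = true;
          }
        }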

    Another concept to explore for VR is letting users generate their own 360 experience. For example, they go to an art gallery and take a panoramic shot of one of the rooms in the exhibition, or take a bunch of overlapping pictures; then some code stitches the pictures together and makes the result available to the VR app as a source scene for other users to explore or share, for example for collaborations or even as a way to advertise. As you can see, this approach uses photo data instead of generating a scene. I have to admit I am not sure whether it is possible to create and use a 360 image in VR, but the data is there to create the experience, so I don't see why not.
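
    On the Processing side, displaying such a picture mostly comes down to texturing a sphere and viewing it from the inside. A rough desktop sketch of the idea ("pano.jpg" standing in for an equirectangular photo; the sphere is large enough that the default camera ends up inside it):

        PShape globe;

        void setup() {
          size(800, 800, P3D);
          // equirectangular panorama; the file name is just a placeholder
          PImage pano = loadImage("pano.jpg");
          globe = createShape(SPHERE, 1000);
          globe.setTexture(pano);
          globe.setStroke(false);
        }

        void draw() {
          background(0);
          translate(width/2, height/2);
          rotateY(frameCount * 0.005);  // stand-in for head rotation in VR
          shape(globe);
        }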

    Just to clarify, I am pitching in some ideas but I am not associated with the foundation.

    Kf

  • @kfrajer

    Implementing VR interaction with the magnetic sensor available in most phones should be straightforward, and I'd encourage it. The concept is quite simple and can be done in Processing's Android mode.

    Haha, I can't actually find this in the documentation; if you do find it, can you send me a link? Maybe I just missed it (:

    However, the implementation is done with the phone's magnetometer, so it should not be hard to do. But this solution is not used by everyone, as not all phones have magnetometers, so the raytracing solution is sometimes preferred because it doesn't need any additional sensor.
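
    Just to show what I mean, the detection itself is only a matter of watching for a spike in the field magnitude. A rough sketch (this assumes Ketai's magnetic-field callback, and the 60 microtesla threshold is a number I'd have to calibrate per device):

        import ketai.sensors.*;

        KetaiSensor sensor;
        float baseline = 0;
        boolean pulled = false;

        void setup() {
          fullScreen(P2D);
          sensor = new KetaiSensor(this);
          sensor.start();
        }

        void draw() {
          background(pulled ? color(0, 255, 0) : color(40));
          pulled = false;
        }

        void onMagneticFieldEvent(float x, float y, float z) {
          float mag = sqrt(x*x + y*y + z*z);
          if (baseline == 0) baseline = mag;            // first reading as a rough baseline
          baseline = lerp(baseline, mag, 0.02);         // slowly track the ambient field
          if (abs(mag - baseline) > 60) pulled = true;  // magnet sliding past = sharp spike
        }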

    I don't have experience with ray tracing but it is an interesting concept to explore.

    Oh, raytracing (ray casting, to be precise) is a standard technique used to interact with a 3D world from a 2D set of coordinates, and you can see it in a lot of Cardboard examples. It is also used in a lot of 3D games: you click on your screen (which has 2D coordinates) to select objects in a 3D world, and the program needs to understand which object you clicked on. In practice, every time you click in your 3D game, an invisible line (a ray, actually) is cast, starting from the point where you clicked and perpendicular to your screen (so, going along the z-axis if you consider the x and y axes to be the ones of the screen). In this way, the program can compute which objects the ray intersects and in which order, thus determining the object you clicked on.

    More info on raytracing used for this purpose: click here

    In Cardboard, the ray is used to determine what you are looking at, since the screen of your phone is 2D but you are looking into a 3D world. Games often have a function that lets you interact with an object (click on it, for example) if you look at it long enough, say 3 seconds. In this way, you can interact without using any other sensor or device on your phone. This is what I was referring to as 'gaze controls'. Yes, it does require you to design your experience in a way that fits with looking at something still for a few seconds, but it's a tradeoff.
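
    To make the idea concrete, here is the core of a gaze-dwell check in plain Processing terms: cast a ray from the eye along the view direction, test it against the target (a sphere in this toy example), and trigger once the target has been looked at for long enough. On the desktop the mouse stands in for the head direction; in VR it would come from the headset orientation. The three-second dwell is just an example choice:

        PVector eye = new PVector(0, 0, 0);
        PVector target = new PVector(0, 0, -300);  // example target straight ahead
        float targetRadius = 60;
        int gazeStart = -1;      // millis() when the gaze first landed on the target
        int dwellMillis = 3000;  // look at the target for 3 seconds to "click"

        void setup() {
          size(400, 400);
        }

        void draw() {
          // fake the head direction with the mouse for this desktop demo
          PVector dir = new PVector(map(mouseX, 0, width, -0.5, 0.5),
                                    map(mouseY, 0, height, -0.5, 0.5), -1);
          dir.normalize();

          if (rayHitsSphere(eye, dir, target, targetRadius)) {
            if (gazeStart < 0) gazeStart = millis();
            if (millis() - gazeStart > dwellMillis) {
              println("gaze click!");  // trigger the interaction here
              gazeStart = -1;
            }
            background(0, 255, 0);
          } else {
            gazeStart = -1;            // looked away, reset the timer
            background(40);
          }
        }

        // standard ray-sphere intersection test (d must be normalized)
        boolean rayHitsSphere(PVector origin, PVector d, PVector center, float r) {
          PVector oc = PVector.sub(origin, center);
          float b = 2 * oc.dot(d);
          float c = oc.dot(oc) - r * r;
          return b * b - 4 * c >= 0;
        }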

    Thinking about the bigger picture, maybe you could generate a VR demo experience with the option for the user to hook up their own physical trigger. You don't have to implement all these technologies, just create an API where users can easily connect their favorite trigger.

    I thought about this, but I understood this to be a demo showing what Processing is capable of, and I also wanted it to be simple to use, so it should not require any additional hardware, like an Arduino (with a gyroscope/accelerometer attached). So, those should not be required, but the idea of creating an API to hook up whatever you want is sweet (: I am just a little scared thinking about how many differences there may be in the calibration and output of all the different accelerometers <: D

    In a recent VR experience, for example, the viewer came with a handy remote control, like a Wii controller. I believe it was the Sony VR viewer... I have to check. The remote enhanced the experience by allowing the user to actively interact with the scene.

    Oh yes, many headset manufacturers are doing this: HTC Vive link, the newest Oculus Rift link. And it's awesome. But those controllers are incredibly precise and specifically manufactured for their device (and cost a lot of money), so we face some extra challenges in doing the same (:

    I tried doing the same thing with Wii controllers, as I mentioned in a previous post. The problem is, there is no easy way to talk to them (they communicate via Bluetooth): when I did this, you needed root privileges on your phone to access custom Bluetooth communication, and I don't want to make something that requires root privileges on people's phones...

    Another concept to explore for VR is letting users generate their own 360 experience. For example, they go to an art gallery and take a panoramic shot of one of the rooms in the exhibition, or take a bunch of overlapping pictures; then some code stitches the pictures together and makes the result available to the VR app as a source scene for other users to explore or share, for example for collaborations or even as a way to advertise. As you can see, this approach uses photo data instead of generating a scene. I have to admit I am not sure whether it is possible to create and use a 360 image in VR, but the data is there to create the experience, so I don't see why not.

    Unluckily for us, Google has already thought about this: click here

    Thank you for all your inputs!

    For magnetometer data manipulation, I was thinking about the Ketai library (ketai.org), or using the Android sensor classes directly through the provided API: https://developer.android.com/reference/android/hardware/SensorEvent.html
    https://developer.android.com/guide/topics/sensors/sensors_position.html
    https://developer.android.com/reference/android/hardware/SensorManager.html

    Note that Ketai also offers demos for working with Bluetooth and WiFi Direct. I am not sure if they qualify as robust APIs, but they are fine for a proof of concept.

    Thanks for the links on ray tracing. There have been some posts in the forum related to this topic when working with 3D scenes.

    I was exploring the Cardboard apps that are available, and there are a few of them out there to try. Having an image-stitching demo in Processing, plus a VR app to view its output, would encourage people in the VR community to get involved with Processing and push for new and exciting tool development in this domain. It is just an idea and I was more or less thinking out loud. I am hoping other forum goers weigh in on these concepts and make more suggestions or provide their evaluations. If this is related to the Google Summer of Code program, then it will be your call whether you want to pursue this or any of the previous ideas.

    Kf

  • Hi there, great discussion. Thanks @kfrajer for your feedback!

    The possibility of using extra sensors/hardware to add interaction in VR is definitely very exciting, but for the purpose of this GSoC proposal, my recommendation would be to use what the mode currently supports, and build on top of it.

  • Also, just wanted to note that adding support for additional sensors/hardware to improve interaction in VR through a contributed library could be another GSoC proposal :-)

    Cheers!

  • Hi @picorana and @kfrajer, remember that the proposal submission process is currently open until April 3rd. Many of the ideas discussed here could make for very good GSoC proposals; please let me know if you have further questions. The proposals can be reviewed and tweaked on the Summer of Code submission site until the submission deadline.

  • @codeanticode I'm working on it! I'll post it soon, thank you a lot!

  • @codeanticode Ah, I see this is for students. It would have been cool to participate otherwise. Definitely looking forward to seeing the development of VR in the following months!

    Kf

  • @kfrajer Ok, I see, many thanks for your valuable suggestions anyway. @picorana: awesome, looking forward to it.

  • @codeanticode So this is a draft of the proposal: https://docs.google.com/document/d/1zwE7IOKllL3QS32VHw_eBnWpm7xkvQwadNXzmEFI940/edit?usp=sharing

    I will share an editable version through the submission system (:

    I'd be super thankful if you could share your thoughts on it!

  • Cool, thank you! I will go through it and add my comments & suggestions.

  • @codeanticode, thank you so much for taking the time to review my submission. So, you would focus more on writing a detailed tutorial to add to processing-android? I mean, I can change the whole project to be more focused on making a good tutorial (:

  • I think the code part would still be more important in this case. The tutorial could simply start as step-by-step documentation of how the Cardboard Mode demo was developed.
