Good day, Community!
I am thinking about a project idea to submit and the right way to do that. I have looked at:
- https://github.com/processing/processing/wiki/Project-List
- https://github.com/processing/p5.js-web-editor
- the 2015 projects on https://www.google-melange.com/archive/gsoc/2015/orgs/processing
- http://shiffman.net/2013/09/24/gsoc/
Prologue. Three months ago my team and I started a very exciting project, TuSion - it is built around Art, Perception Art, and Op Art themes. From the beginning we decided to use Processing as an awesome instrument for prototyping Generative Art as well as Perception Art apps. The main idea is to create an interactive art neurofeedback protocol between two or more people, based on EEG data measured from the brain via a neurointerface. Working scenario: a) you put on a neuroheadset, b) take a smartphone, for example, c) run the TuSion app, d) choose single or multiplayer mode, e) open an art object (we call it a 'tuse'), and f) start playing with it. The tuse itself, in turn, will play with the clients' perception. The project page is here: http://fb.com/tusionapp. Here I see a way to improve perception plasticity. I am currently a PhD student at IHB RAS working on neurofeedback protocols (Physiology), with a CS background; engineering Brain-Computer biofeedback protocols is one of my dominant interests.
I am new to the development process of big open-source projects, so what is the right way to go about this?
Today I see 3 project ideas:
Signal visualizing tool. A tool/plugin for the PDE that handles time-series signals and visualizes them. It would be useful for developing brain-art and science-art projects (a minimal sketch of the idea follows this list).
'VR + brain' samples. VR is a space where every parameter of the surrounding environment can be changed, and yet there are still no samples on that theme inside the sample packs.
Connector with OpenViBE, a very popular open-source engine for testing and developing brain-computer interface experiments, demonstrations, and apps.
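To make the first idea concrete, here is a minimal Processing sketch of the kind of view such a tool could offer: a scrolling time-series plot. The sine-plus-noise signal is only a stand-in for real EEG data, and every name in it is illustrative.

```java
// Minimal scrolling time-series visualizer (placeholder signal, no hardware needed)
float[] samples = new float[600];   // ring buffer of recent values
int head = 0;                       // index where the next sample is written

void setup() {
  size(600, 200);
}

void draw() {
  // Fake EEG-like signal: slow oscillation plus noise; a real tool would read device data here
  samples[head] = sin(frameCount * 0.05) * 0.6 + random(-0.3, 0.3);
  head = (head + 1) % samples.length;

  background(255);
  stroke(0);
  noFill();
  beginShape();
  for (int i = 0; i < samples.length; i++) {
    float v = samples[(head + i) % samples.length];   // oldest to newest
    vertex(i, height / 2 - v * height / 3);           // map value to screen y
  }
  endShape();
}
```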
Would it also be useful to work on fixing issues like this one: https://github.com/processing/processing/issues/4736?
Any comments? Best,
Comments
Also, if someone wants to build cool generative art on brain data - welcome. Co-creation and co-working are great ^^ PM!
As a result, I submitted the BrainWave Lab Tool in the GSoC dashboard. It is a tool for engineering neurofeedback and Brain Art sketches that works with widely used neurodevices.
@j0k thanks for your BrainWave proposal. By "tool" do you mean library? Library extensions to Processing that facilitate communication with hardware devices are suitable for GSoC. I look forward to reading your proposal.
Can you say more about the potential of this platform? Could the algorithms there be implemented in Processing, or would it be more of a tandem interaction between these platforms?
Another question is the accessibility of the headset and the learning curve to use it. I could envision having a VR experience influenced by neuro-activity. I have to say I know nothing about neuro technology or the information obtained from neuro-physiology.
In your project idea above, does your Android device act as the middle-man, gathering data and displaying it on its screen? Using a standalone computer would be the place to start, in my opinion.
Please note I am not associated with the Processing Foundation. I just found your idea cool and futuristic.
Related to exporting apps, the issue is known; we can only make requests to fix it, and eventually it will be handled, or somebody in the community will take the initiative to address it. It boils down to priority...
Kf
@shiffman, that is the right question << "By "tool" do you mean library?"
Now I understand that I mean a combination of a library and a tool: both a jar library and a GUI tool. I will correct the application description.
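To illustrate what that split could look like, here is a rough sketch of how the jar-library side might be used from a Processing sketch, with the GUI tool layered on top. All names here (BrainWaveSource, bandPower) are hypothetical, not part of any existing library or of the proposal itself.

```java
// Hypothetical shape of the library half; all names are illustrative only.
BrainWaveSource brain;

void setup() {
  size(400, 400);
  brain = new BrainWaveSource();   // a real library would open the device or stream here
}

void draw() {
  background(0);
  float alpha = brain.bandPower("alpha");                  // e.g. relative 8-12 Hz power, 0..1
  ellipse(width/2, height/2, alpha * 300, alpha * 300);    // map the value to a visual
}

// Stub standing in for the jar library so this sketch runs on its own
class BrainWaveSource {
  float bandPower(String band) {
    return noise(frameCount * 0.01);   // placeholder signal instead of real EEG
  }
}
```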
@kfrajer, first I will describe OpenViBE.
OpenViBE is written in C++, but there is a way to implement algorithms in MATLAB and Python that will run inside OpenViBE scenarios. Here is a simple example of the working process with OpenViBE. A few months ago there were 30+ different supported neuroheadsets, including the Emotiv EPOC, Mitsar EEG, OpenBCI, and NeuroSky. OpenViBE has an 'LSL Exporter' component for streaming signals to the outside. So if you want to use Processing, it would be more of a tandem interaction between Processing and OpenViBE.
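As a rough sketch of that tandem setup: assuming OpenViBE streams the signal out over Lab Streaming Layer and the liblsl Java bindings (package edu.ucsd.sccn) are available to the sketch, the Processing side could pull samples roughly like this. The call names follow the liblsl Java examples and may differ between versions, so treat this as an assumption-laden outline rather than definitive reference code.

```java
// Rough sketch: pull EEG samples that OpenViBE streams out over LSL.
// Assumes the liblsl Java bindings (edu.ucsd.sccn.LSL) are on the sketch's classpath;
// method names follow the liblsl Java examples and may differ by version.
import edu.ucsd.sccn.LSL;

LSL.StreamInlet inlet;
float[] sample;

void setup() {
  size(400, 400);
  try {
    LSL.StreamInfo[] streams = LSL.resolve_stream("type", "EEG");  // find an EEG stream on the network
    inlet = new LSL.StreamInlet(streams[0]);
    sample = new float[inlet.info().channel_count()];
  } catch (Exception e) {
    println("Could not open an LSL stream: " + e);
  }
}

void draw() {
  background(0);
  if (inlet == null) return;
  try {
    inlet.pull_sample(sample);   // blocks until the next sample arrives
  } catch (Exception e) {
    return;
  }
  // One bar per channel from the latest sample (proper scaling left out of this sketch)
  float w = width / float(sample.length);
  for (int i = 0; i < sample.length; i++) {
    rect(i * w, height / 2, w - 2, -sample[i]);
  }
}
```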
Another question is about the accessibility of the headset and the learning curve to use it. What do you mean? I don't understand which aspect of the question you are asking about. If the question is about HOW TO start developing BCIs, I think there are two good ways to do that.
Also, google projects on GitHub using the keywords: neurosky emotiv bci processing. Just an example.
The principal difference between these two cases: if you decide to use a neuroheadset with passive electrodes (Emotiv EPOC, Mitsar, and all professional EEG devices), all your practical experiments will take a lot of time. BCI engineering and perception-app development means programming, testing, and running a lot of experiments while choosing the right experiment design.
I still think that I don't understand your question :) Ok.
About the middle-man in the TuSion case. Please see this video. In TuSion the Android app works as a standalone app without any desktop processing part. It would also be useful to make a standalone desktop app for installations in museums. We have an agreement to set up a stand inside the Museum of Optic Illusions. If you are interested in that - join us!
<< Using a stand alone computer would be the place to start in my opinion.
I also think so. In the GSoC application the goal is to build the best tooling for the desktop BCI engineering process. The Android app is the second step.