ARCore Renderer - Processing Android

Howdy, I'm Syam Sundar K and I would like to work with the Processing Foundation on the ARCore Renderer for the Processing-Android project. I'm an Android developer with a track record of contributing to major companies such as Mercedes-Benz and Daimler. I'm very comfortable contributing to open source; in fact, I've contributed to a couple of organisations on GitHub. I'm also a game designer and 3D modeller, and I've been developing games with Unreal since 10th grade. I'm also good at organising and documenting my work.

If you'd like to check out my projects, here they are: https://github.com/SyamSundarKirubakaran.

Ever since the beginning of high school I've been fascinated by developing augmented reality apps. ARCore hadn't been introduced back then, but the fascination stuck; in fact, it led me to learn Swift for ARKit. Sooner than I expected, I attended Apple's ARKit conference in Bangalore, India.

Now that ARCore 1.0 has been officially announced, and being a native Android developer, I'm highly pumped. By the way, keep an eye on my GitHub profile for updates on the ARCore projects I'm working on; I'll be pushing them soon.

Stories apart, I've already set up Processing for Android and am getting familiar with the environment. I'm also looking into the VR renderer, since the AR renderer will share some similarities with it, as suggested by @codeanticode.

Any suggestions or thoughts would be appreciated. Thanks

Comments

  • One possible approach to start thinking about how the AR renderer could look is to take some of the basic ARCore samples, such as this one, and try to imagine them re-written with a hypothetical Processing-like API.

  • Yes @codeanticode, I'm looking for efficient ways of wrapping it as a Processing-like API. I'm experimenting with the ARCore code samples, taking some of the functionality of the ARCore API and remapping it onto temporary callbacks and methods of my own, similar to those in the Processing Foundation's references and libraries.
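    For instance, here is a purely hypothetical sketch of what such a remapping could feel like; none of these names exist yet - fullScreen(AR), trackedPlaneFound() and anchorMatrix() are invented only to illustrate the idea:

    // Hypothetical Processing-like AR sketch; every AR-specific call is invented for illustration.
    void setup() {
      fullScreen(AR);                // analogous to fullScreen(P3D), with the camera feed as background
    }

    void draw() {
      lights();
      if (trackedPlaneFound()) {     // invented query wrapping ARCore's plane finding
        applyMatrix(anchorMatrix()); // invented: model matrix of the nearest anchor point
        box(0.1f);                   // a small box sitting on the detected plane
      }
    }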

  • Hello and welcome, @syam_sundar_k

    Taking some of the functionality of the ARCore API and remapping it onto temporary callbacks and methods of my own, similar to those in the Processing Foundation's references and libraries

    I believe this would be a very nice start (: I personally have used NyARToolkit in the past for AR - perhaps you can get some ideas from it. There is also this one: PapARt.

    While you get familiar with ARCore, I would also start figuring out a possible timeline for the summer.

    Looking forward to your next update!

  • Hi @picorana, thanks for pointing to these alternative AR toolkits! ARCore seems like a natural choice since it is the "official" AR SDK for Android, but as you said the other toolkits/libraries may give us good ideas about an easy-to-use AR API.

  • Hi @picorana, welcome aboard.

    I personally have used NyARToolkit in the past for AR - perhaps you can get some ideas from it. There is also this one: PapARt.

    As suggested, I looked into the toolkit. It's really impressive, but I'm not sure how well it works with the Android framework, since it's a library for Processing (Java mode).
    I'm also looking for any other libraries or toolkits that could help the AR development process on Android.
    Currently I'm going through the official ARCore documentation and references, since that's the obvious starting point on Android. Of course, if you come across any other libraries, please do suggest them and I'll surely take a look.

    I would also start figuring out a possible timeline for the summer.

    Yes @picorana , I’m drafting my schedule and fixing time span for each stage of the project. Will ring a bell soon.. Thanks for asking..

    @kfrajer @shiffman It would be great to have your opinion as well..! Please do share your thoughts.

  • Hi @codeanticode @picorana, I'm halfway through my research on building the project. Here are some things I'd like to discuss and the challenges I might face while accomplishing my goals, in the abstract:

    • ARCore will, for the first time in Processing for Android, make use of the camera image. From what I've seen of Processing-Android so far (I probably haven't looked into all the classes yet), none of the example projects or classes use a direct image from the camera. VR uses gyroscopic data in real time and is built purely on a virtual world, but with AR I have to manage the camera image data source and the virtual object rendering simultaneously (see the sketch after this list).

    • It's not advisable to use a global coordinate system for AR, as used in all the Processing projects so far. From the documentation and the official ARCore video content I've gone through, the suggestion is to use relative coordinates, since objects are placed at anchor points that are highly relative to the environment context. For example, if you place an object on a horizontal plane in the scene, the object changes its perspective based on the camera location: if the camera moves to the left, the left side of the object is shown; when moving to the right, the right side is; moving forward and backward should zoom in and out respectively. I'd really appreciate mentoring advice here.

    • Query: importing a polygon mesh in real time. Shapes can be created with Processing calls like box() or rect(), but I was wondering whether arbitrary 3D assets provided by the user would be accepted. For example, to apply a texture onto a 3D object I can use an image as its texture; similarly, is there any possibility of importing a mesh exported from Maya or 3ds Max directly into the Processing code so that I can see it in the running environment?
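    (To make the first point concrete, here is a rough sketch of the per-frame hand-off in plain ARCore Java; the OES texture creation is left out and the method name drawARFrame is just illustrative.)

    import com.google.ar.core.Frame;
    import com.google.ar.core.Session;
    import com.google.ar.core.exceptions.CameraNotAvailableException;

    // Rough sketch: each frame, the renderer gives ARCore a GL texture to fill with
    // the camera image, then asks for the latest frame and tracking data.
    void drawARFrame(Session session, int cameraTextureId) throws CameraNotAvailableException {
      session.setCameraTextureName(cameraTextureId); // ARCore writes the camera image here
      Frame frame = session.update();                // latest camera frame + updated poses
      // 1. draw the camera texture as a full-screen background quad
      // 2. render the virtual objects on top, using frame.getCamera() for view/projection
    }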

    Tagging: @kfrajer @shiffman
    Thanks in Advance.

  • I am not familiar with ARCore or ARKit. I have seen some videos online. What is the current state of ARCore? What are people using it for and how are they using it? It is an interesting project, btw.

    whether arbitrary 3D assets provided by the user would be accepted. For example, to apply a texture onto a 3D object I can use an image as its texture; similarly, is there any possibility of importing a mesh exported from Maya or 3ds Max

    Not an expert on OBJ files, but I will say this: I'm not sure any 3D asset will be readily accepted by Processing on the Android side. It depends on the design and the way these types of objects are handled, which I am not familiar with. I can only say that exports from Maya are very likely to work. Daniel Sauter, in his book Rapid Android Development, made a reference to this:

    Working in 3D, Object ( obj ) is a very popular and versatile file format. We can use an OBJ as a self-contained 3D asset and load it into a 3D app on our Android. All the textures for our figure are already predefined in the OBJ file, making it fairly easy to handle in Processing with our familiar PShape class. Yes, it handles OBJ files as well.

    Object files are not XML -based in their organizational structure, but they still contain data segments with coordinates for the vertices that define the figure and data segments that link to assets such as materials and textures to the file. The model we’ll work with was loaded from Google Sketchup’s 3D warehouse and converted into the Object format using Autodesk Maya.

    Sample file here: https://github.com/ketai/rapid-android-development/tree/master/ShapesObjects/ObjectFiles

    However, from what I have seen in the forum, not all obj files seem to be compatible - some have problems when they are loaded. But then again, I have only seen the bad apples... maybe the PShape class performs well, or maybe the obj file structure needs to follow a specific format. That is something you should add to your proposal and investigate. For the initial prototype you should stick to a reliable source of object assets. I will probably follow the same source as used by ARCore.
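    For reference, a minimal Processing sketch along those lines (the file name figure.obj is just a placeholder; its textures and materials need to sit next to it in the sketch's data folder):

    PShape model;

    void setup() {
      fullScreen(P3D);
      model = loadShape("figure.obj");  // OBJ loaded as a PShape, materials resolved from its .mtl
    }

    void draw() {
      background(0);
      lights();
      translate(width/2, height/2, 0);
      shape(model);
    }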

    Ok, changing gears. In your second question in your last post:

    the suggestion is to use relative coordinates, since objects are placed at anchor points that are highly relative to the environment context

    So is it currently known what the best approach is to place and track these objects? They talk about relative coordinates and anchor points. Are the relative coordinates based on these anchor points? Let me re-phrase my question: What is the definition of the local coordinate system and what is an anchor point, from your reference? Is it known whether active sensors like gyros and accelerometers are used in the process at all, or are they unreliable for this technology?

    Kf

  • Hi @kfrajer,

    What is the current state of ARCore? What are people using it for and how are they using it?

    The stable version of ARCore, v1.0.0, was released a couple of weeks ago. The environment seems to work pretty decently after the stable release, definitely better than the preview release Google published last December. It's a lot of fun to use and develop on; you can see some of the outcomes of ARCore from recent months here. Developers who were working with the preview release have definitely moved on to the stable SDK channels. The rendering process is somewhat time consuming because of the per-object overhead, i.e., applying the texture over the object and updating its perspective as the camera moves from point to point. There was a serious issue with the SDK preview known as object drifting (the object doesn't stay at the anchor point where it's left; it moves around as the camera moves). It's now overcome in v1.0.0 (I haven't experienced it since the stable release). There are still some bugs in the channel, which they claim to be an error in the system image. Here is one of the issues I've raised in their SDK tracker.

    Issue : https://github.com/google-ar/arcore-android-sdk/issues/231

    I can only say that exports from Maya are very likely to work.

    I thought of making an .obj able to be imparted as part of a .pde, so that these objects can be directly implanted into the AR scene as the rendering happens in real time. This would be so cool. (It should be feasible, but I'm not sure what it takes to implement this; maybe @codeanticode could suggest something to make this happen.) This is similar to how users add a .jpg file to apply a texture over three-dimensional objects like box in a .pde file, which is rendered in real time (I've seen it in this example)

    However, from what I have seen in the forum, not all obj files seem to be compatible - some have problems when they are loaded.

    This is what bothers me a bit, since there aren't such problems for regular geometric objects like box or rect. AR is mostly about building fascinating implementations that involve objects with complex geometric structures, uneven face distribution and very general vertex clouds.

    I will probably follow the same source as used by ARCore.

    Yeah, maybe I should stick with what you say.

    What is the definition of the local coordinate system and what is an anchor point, from your reference?

    Ok, this is a good question, let me elaborate: there are a number of important processes involved in rendering even a simple object in AR. They are:

    • Motion Tracking
    • Plane Finding
    • Light Estimation

    Motion tracking isn't essential for the current discussion, so I'm skipping it. Plane finding is the most important part of all: an appropriate plane is found so that objects can be placed on it. These detected planes contain a collection of vertices forming a point cloud, and each vertex in the point cloud is known as an anchor point. If you have to place an object on the plane, it should be placed at an anchor point (not too far away from it); if you place it far from it, it leads to odd effects (technically you can't place it far from it, as that is completely restricted after v1.0.0 by the SDK itself). So these anchor points are very important when placing an object.
    My point is, the objects should stay in correspondence with the camera position (sorry for being too technical), so they should be relative. Ok, let me put it this way:
    If you want to place a number of objects on a table, creating anchor points for each and every object placed on the table leads to computational difficulty and will eventually lead to performance throttling of the device. So it is better practice to create four anchor points for the table (one for each leg) and place the objects relative to those anchor points. This way coordinates are made relative, and no absolute world coordinates are involved. Hope you get my point.
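    (In ARCore terms, that roughly means composing poses off a single anchor; a rough sketch, with made-up 0.3 m offsets:)

    import com.google.ar.core.Anchor;
    import com.google.ar.core.Pose;

    // Rough sketch: one anchor for the table, several objects placed relative to it.
    Pose[] placeOnTable(Anchor tableAnchor) {
      Pose tablePose = tableAnchor.getPose();
      Pose cupPose   = tablePose.compose(Pose.makeTranslation(0.3f, 0f, 0f));
      Pose platePose = tablePose.compose(Pose.makeTranslation(-0.3f, 0f, 0.1f));
      return new Pose[] { cupPose, platePose };
    }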

    Is it known whether active sensors like gyros and accelerometers are used in the process at all, or are they unreliable for this technology?

    Data from the gyroscope, the accelerometer and the real-time camera image (the objects are rendered and placed over the image; however, the image has to be processed for every frame so that the correct location to place the object can be found) is essential for smooth and seamless operation.

    Thanks kf. Your questions really helped me explain things, and as for the link you suggested

    Sample file here: https://github.com/ketai/rapid-android-development/tree/master/ShapesObjects/ObjectFiles

    I will surely take a look. Thanks a lot for your time.

    :)

  • My point is, the objects should stay in correspondence with the camera position (sorry for being too technical), so they should be relative. Ok, let me put it this way: If you want to place a number of objects on a table, creating anchor points for each and every object placed on the table leads to computational difficulty and will eventually lead to performance throttling of the device. So it is better practice to create four anchor points for the table (one for each leg) and place the objects relative to those anchor points. This way coordinates are made relative, and no absolute world coordinates are involved. Hope you get my point.

    I guess that, if we are talking about rendering a huge number of objects, we also have to take into account the difficulty of actually rendering the meshes, more than computing the anchor points. I mean, will the "throttling" actually come from the number of anchor points, in this scenario?

    I believe using a relative coordinate system totally makes sense, and also simplifying the number of anchor points would be nice regardless of the computational cost. How would you identify where the legs of the table you are talking about actually are? Are the planes identified directly by ARCore?

    I thought of making an .obj able to be imparted as part of a .pde, so that these objects can be directly implanted into the AR scene as the rendering happens in real time. This would be so cool. This is similar to how users add a .jpg file to apply a texture over three-dimensional objects like box in a .pde file, which is rendered in real time (I've seen it in this example)

    I don't know if I'm understanding thoroughly: why would you have an .obj inside a .pde instead of loading it as a resource? I don't get the "implanted into the .pde as rendering happens in real time" part. Please correct me if I'm wrong and did not understand clearly, but an obj is just a collection of triples that describe vertex positions, normals, possibly texture coordinates and more, so ideally you could actually "implant" something complex into your Processing sketch instead of loading it from an external file and then parse it directly from inside your sketch; it's just not practical.

    Still, I believe that starting from basic 3D PShapes, like simple boxes, would be a nice starting point (:

  • Hi @picorana,

    will the "throttling" actually come from the number of anchor points, in this scenario?

    Actually, you are right, it's not caused just by the anchor points; there are any number of factors that may cause throttling in AR. It mainly depends on the device the application is running on. If the CPU and GPU are able to handle all the hectic tasks we are about to throw at them, then this may not be an issue. But it can still become a problem. Consider:

    If the blocking config is used, frames arrive at the same rate the camera runs. Now, the anchor points have to be remapped for each and every frame, because the probability that the user has moved the camera is high (motion tracking also has to be done). So if more anchor points are used, more mapping has to be done for each and every frame, which leads to complexity.
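    (A rough sketch of that per-frame loop with the ARCore Java API; onDrawFrame and the anchors list are just illustrative names:)

    import com.google.ar.core.Anchor;
    import com.google.ar.core.Frame;
    import com.google.ar.core.Session;
    import com.google.ar.core.TrackingState;
    import com.google.ar.core.exceptions.CameraNotAvailableException;
    import java.util.List;

    // With the blocking config, update() waits for a new camera frame, and every
    // tracked anchor's pose has to be re-read before drawing.
    void onDrawFrame(Session session, List<Anchor> anchors) throws CameraNotAvailableException {
      Frame frame = session.update();                // blocks until the next camera frame
      float[] modelMatrix = new float[16];
      for (Anchor anchor : anchors) {
        if (anchor.getTrackingState() == TrackingState.TRACKING) {
          anchor.getPose().toMatrix(modelMatrix, 0);  // refreshed every frame
          // ... draw the object attached to this anchor with modelMatrix
        }
      }
    }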

    How would you identify where the legs of the table you are talking about actually are?

    I was just trying to give an example to explain the need for a relative coordinate system. We can't identify the anchor points for the legs unless we place the table into the scene. Once placed, the anchor points are computed automatically based on the location where it has been placed.

    Are the planes identified directly by ARCore?

    Yes, plane finding is one of the important steps involved when rendering for AR. I wouldn't say planes are detected completely automatically by ARCore; there are small code snippets for finding planes in the given scene using ARCore, along these lines:
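    (Rough sketch with the ARCore Java API; the horizontal-plane filter is just an example of what the renderer could look for.)

    import com.google.ar.core.Plane;
    import com.google.ar.core.Session;
    import com.google.ar.core.TrackingState;

    // Ask ARCore for the planes it has detected so far and keep the tracked,
    // horizontal, upward-facing ones.
    void findPlanes(Session session) {
      for (Plane plane : session.getAllTrackables(Plane.class)) {
        if (plane.getTrackingState() == TrackingState.TRACKING
            && plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING) {
          // plane.getCenterPose() gives a pose we could hang anchors off
        }
      }
    }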

    I don't know if I'm understanding thoroughly: why would you have an .obj inside a .pde instead of loading it as a resource? I don't get the "implanted into the .pde as rendering happens in real time" part. Please correct me if I'm wrong and did not understand clearly, but an obj is just a collection of triples that describe vertex positions, normals, possibly texture coordinates and more, so ideally you could actually "implant" something complex into your Processing sketch instead of loading it from an external file and then parse it directly from inside your sketch; it's just not practical.

    Oh sorry, I made a mess of explaining it (only realising it now). I meant exactly what you said: place the .obj as a resource and access it when it is placed inside the scene. I didn't mean placing it inside the .pde file. I shouldn't have used the word "implant"; that's what caused the misunderstanding. My bad.

    Still, I believe that starting from basic 3D PShapes, like simple boxes, would be a nice starting point

    Yes, I have the same thought. Let's go with the simple ones first.

    Thanks a lot for your Time.
    Regards,
    Syam Sundar K

  • With regard to the relative coordinates, we don't need to figure out a solution right now, but it would be good to think about how we can make it as simple and straightforward as possible, so new users can focus on creating their AR scene and not on the technical details of defining the planes, etc.

    For example, if the AR renderer introduces some kind of Anchor class, we could attach any PShape object to it (either defined algorithmically, or loaded from an OBJ or another 3D file), and it would automatically resolve relative motions by offering its own translate/rotate/scale methods:

    Anchor pt;
    PShape box;
    PShape sphere;
    
    ...
    pt.attach(box, 0, 0, 0);
    pt.attach(sphere, 100, 100, 0);
    ...
    // Applies a translation of 500 units along the Z axis of the coordinate system
    // defined at anchor point pt, followed by a 180 degree rotation around the 
    // X axis
    pt.translate(0, 0, 500);
    pt.rotateX(PI);
    

    Just throwing out an idea, of course there would be many other ways to do it.

  • Hi @codeanticode ,

    Just throwing out an idea, of course there would be many other ways to do it.

    I myself had the same idea.

  • One thought I had reading through this thread is that, because there are so many details to consider, it might be a good idea to think about a couple test apps to create to support these additions. Then you can get a sense of whether or not the API is too complex for the Processing audience. It may not be important to allow all options in the initial release, but just to set some good defaults so that people can see how the core functionality might work without too much hassle. As you write your timeline for the proposal, I suggest building in some time for testing and refining the API.

  • Minute 0:35 to 1:10... This is my first request for features... or at least, a cool demonstration: 3D graffiti

    Kf

    Keywords: kf_keyword Augmented Reality ARcore Android GSOC

  • Hi @kjhollen,

    it might be a good idea to think about a couple test apps to create to support these additions.

    Yes, the best approach when implementing these kinds of renderers is to test simultaneously as each module is implemented, and I'm definitely going to follow this strategy. I'll make sure to build a couple of test apps to exercise the functionality. Sure thing, point taken.

    It may not be important to allow all options in the initial release, but just to set some good defaults so that people can see how the core functionality might work without too much hassle.

    Yes, I'm not targeting all the features of ARCore in the API; I'll focus on the most standard ones at first.

    As you write your timeline for the proposal, I suggest building in some time for testing and refining the API.

    In fact, I've allotted a certain time span just for testing in my coding period. No worries!

    Thanks a lot for your opinion @kjhollen! :)

  • Hi @kfrajer,

    3D graffiti

    Awesome idea, really impressive! This surely involves a lot of image processing and machine learning combined with AR. It will be a really challenging task for me if I go that way, since I'm not an expert in those areas and I'm not even sure whether there is native support for this idea in ARCore. I'll add this as an incremental upgrade after an even more stable release comes out, since currently we are just at v1.0.0 [if support for this exists in future versions].

    Thanks a lot for sharing this cool idea! (Big fan of Coldplay BTW :) )

    New Update:

    After some research, @kfrajer, it turns out you can't literally draw in the air and capture it as a sketch in AR; instead, you use the run-time scene shown on your phone as a canvas to make sketches that appear in the real world.

    Check out https://www.youtube.com/watch?v=7SwZUNDsWaM for a clearer and better understanding.

    I don't see this as part of the basics of AR, but it's definitely worth noting. We are just getting started with AR for Processing, and there are of course a lot of issues that have to be overcome to make the API consistent in its workings. I definitely have my eye on your idea. No worries!

    Thanks again for sparking this concept!

  • Hi all,

    I've made a draft of my proposal. Highly anticipating suggestions from your side.

    https://docs.google.com/document/d/1ajkw6x30h9rLFvzejr1G27sxT6oxhEl3uphGzVPynsY/edit?usp=sharing

    @picorana & @codeanticode - Anticipating your words the most!

  • Okay, as you have seen, I posted some suggestions on the document (:

  • Thank you so much for your comments @picorana, I've made all the suggested corrections. Every comment was really helpful. Thanks a lot! :)

    @codeanticode - Waiting for your suggestions..!

  • going through it now :-)

  • Thanks a lot @codeanticode, will go through it soon. :)

  • Hi @codeanticode,

    what does it mean "enhance environmental understanding" in precise terms?

    What I meant here was the second step of the core implementation. ARCore maps and keeps track of the background with the help of the IMU (Inertial Measurement Unit), i.e., data from the gyroscope and accelerometer. My objective is to obtain and apply the projection and view matrices correctly so that it helps in better tracking of the scene (during and after rendering). Hope I've made it clear.
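    (For reference, this is roughly how the ARCore Java API exposes those matrices each frame; the 0.1/100 near/far planes are just example values.)

    import com.google.ar.core.Camera;
    import com.google.ar.core.Frame;

    // Per frame, ARCore's Camera hands back the view and projection matrices
    // that the renderer should use for the virtual content.
    void updateMatrices(Frame frame) {
      Camera camera = frame.getCamera();
      float[] projMatrix = new float[16];
      float[] viewMatrix = new float[16];
      camera.getProjectionMatrix(projMatrix, 0, 0.1f, 100.0f);
      camera.getViewMatrix(viewMatrix, 0);
      // ... feed these into the Processing renderer's projection/camera setup
    }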


    Sorry if I did not explain everything in detail in the proposal; I just wanted to keep it as simple, small, crisp and concise as possible.

    it is always good to add some buffer time in case some aspect takes longer than anticipated.

    BTW, @codeanticode, regarding the buffer time: can you tell me how exactly to add some buffer time in my timeline? It could really help me if my estimations deviate a bit during development.

    Thank you so much for the comments @codeanticode. I hope you'll look into my replies (I've asked a few questions as well; I hope you'll help me figure them out).

    Thanks a lot.!

  • Thanks @picorana and @codeanticode, I've added some buffer time at the end of my timeline. I'll be submitting my proposal in the GSoC portal by today.

    Thanks a lot for your support. Really blessed! :)

  • If anyone would like to check out the updates on my work with ARCore, here it is: https://github.com/SyamSundarKirubakaran/ARCoreWorks

    @kfrajer - 3D graffiti, the feature you asked for: Google just released an app using ARCore that does exactly this, and I'm looking into its source code as well. Thanks for letting me know about this!

    Link: https://play.google.com/store/apps/details?id=com.arexperiments.justaline

    Thanks.!
