
Contents

  1. Aim
  2. Scene captures
  3. Project Log

Promo video: https://drive.google.com/open?id=1vGEUJYrwM7EHdV0KzXYitCrW3WXj-J0N

Tech stack:

  1. Unity 3D
  2. Unreal Engine (for Level of Detail (LOD) generation)
  3. Adobe Premiere Pro
  4. Adobe Photoshop
  5. Autodesk Maya

Aim

We aim to combine music therapy and mood induction procedures to create immersive virtual environments that engage the user while music therapy is administered. Research has shown () that properly administered music therapy can help reduce anxiety, depressive mood, negative thoughts, and similar conditions. Given the reluctance of part of the population to accept that such conditions can affect anyone, and therefore to approach medical care professionals, we believe that tech-based solutions such as ours can go a long way in helping them while maintaining their privacy and anonymity, keeping them comfortable throughout the entire process.

Scene captures

Scene 1: An abstract alien environment designed to disconnect the user from reality, and take them into a world that is quiet, serene, and devoid of the rush of life.

Scene 2 (more work to continue after the hackathon): a journey to our attempt at recreating Atlantis. More additions to follow soon.

Project Log:

Development phase

[10/20/2019 7.02 AM]

Testing of both scene 1 and scene 2 successful (built on Google Cardboard with minimum required Android API level 19).

[10/20/2019 5.30 AM]

A very basic implementation of scene 2 is finished and build-tested.

[10/20/2019 2.30 AM]

Work begins on scene 2. We are trying to emulate some underwater fantasy scenes.

[10/19/2019 7.30 PM]

Scene 1 testing and deployment checked. To achieve a spatialized sound experience, the Oculus SDK's audio spatializer has been used. Updates to the repo are paused due to the large size of the project files.
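
As a rough illustration of the audio setup, the sketch below shows how an AudioSource can be handed to the spatializer plugin in Unity. The component and field names are illustrative, not taken from the project files; the Oculus Spatializer itself is selected under Project Settings > Audio > Spatializer Plugin.

```csharp
using UnityEngine;

// Minimal sketch: marks an AudioSource as spatialized so the spatializer plugin
// selected in Project Settings > Audio (e.g. the Oculus Spatializer) processes it.
// Component and field names here are illustrative, not from the repo.
[RequireComponent(typeof(AudioSource))]
public class SpatializedMusicSource : MonoBehaviour
{
    [SerializeField] private AudioClip therapyTrack;   // hypothetical clip reference

    private void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.clip = therapyTrack;
        source.loop = true;
        source.spatialize = true;      // hand the source to the spatializer plugin
        source.spatialBlend = 1f;      // fully 3D: position affects what the user hears
        source.rolloffMode = AudioRolloffMode.Linear;
        source.minDistance = 1f;
        source.maxDistance = 20f;
        source.Play();
    }
}
```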

Beginning work on scene 2 and on the website. The application has been tested on Android, built with a minimum Android API level of 19 for Google Cardboard, so that the majority of Android users with modern smartphones can run it (see the sketch below).
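
For reference, a minimal editor-script sketch of how the minimum API level can be pinned is shown below; the class and menu names are assumptions, and enabling the Cardboard VR SDK itself is done through Player Settings > XR Settings rather than through this script.

```csharp
#if UNITY_EDITOR
using UnityEditor;

// Minimal sketch, assuming a 2019-era Unity editor: a menu item that pins the
// minimum Android API level to 19 so older Cardboard-capable phones can
// install the build. The Cardboard VR SDK is enabled in Player Settings.
public static class AndroidBuildConfig
{
    [MenuItem("Build/Set Cardboard Min API Level")]
    public static void SetMinApiLevel()
    {
        PlayerSettings.Android.minSdkVersion = AndroidSdkVersions.AndroidApiLevel19;
    }
}
#endif
```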

No lag or FPS drops were observed. The application now renders each frame on the phone in a single pass, meaning both eyes are rendered simultaneously, which boosts performance.
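
A minimal sketch of how Single Pass stereo rendering can be selected from an editor script is shown below; the class and menu names are assumptions. In the editor UI this corresponds to the Stereo Rendering Mode option under Player Settings > XR Settings.

```csharp
#if UNITY_EDITOR
using UnityEditor;

// Minimal sketch: selects Single Pass stereo rendering (both eyes drawn in one
// pass) for the project, instead of the default multi-pass path.
public static class StereoRenderingConfig
{
    [MenuItem("Build/Use Single Pass Stereo")]
    public static void UseSinglePass()
    {
        PlayerSettings.stereoRenderingPath = StereoRenderingPath.SinglePass;
    }
}
#endif
```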

[10/19/2019 12.20 PM]

Demo video created.

[10/19/2019 11.40 AM]

Solutions we found to the problems faced in the previous scene:

  1. Occlusion culling: don't render what you don't see. The image below shows the occlusion map generated for the current scene. We have designed our scene to maximize occlusion.

  2. LOD (Level of Detail): Unreal Engine supports a mechanism to divide a mesh into several simpler meshes. The farther the camera is from the mesh, the simpler the mesh that is rendered (fewer vertices, hence less computation). See the sketch after this list.

  3. Static/dynamic batching: similar objects are drawn together, so their textures, materials, etc. are loaded only once.

  4. Baked lighting: Unity's system for pre-calculating the lighting in a scene and baking (merging lights and shadows) the result into the textures of the affected meshes.

The above factors (along with several others) allow Unity to deploy the scene on Android.
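
As an illustration of the LOD item above, here is a minimal sketch (with assumed field and class names) of wiring pre-simplified meshes, such as those exported from Unreal Engine, into a Unity LODGroup. Occlusion culling, batching, and baked lighting are configured per object through the Static flags and the Lighting/Occlusion windows rather than through code.

```csharp
using UnityEngine;

// Minimal sketch, not from the repo: wires three pre-made meshes into a Unity
// LODGroup so the renderer swaps to cheaper meshes as the camera moves away.
public class LodSetup : MonoBehaviour
{
    [SerializeField] private Renderer highDetail;   // full mesh, seen up close
    [SerializeField] private Renderer mediumDetail; // simplified mesh
    [SerializeField] private Renderer lowDetail;    // very coarse mesh, seen far away

    private void Awake()
    {
        LODGroup group = gameObject.AddComponent<LODGroup>();
        LOD[] lods =
        {
            // Each value is the on-screen height fraction below which the next,
            // simpler level takes over.
            new LOD(0.60f, new[] { highDetail }),
            new LOD(0.30f, new[] { mediumDetail }),
            new LOD(0.10f, new[] { lowDetail }),
        };
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```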

[10/19/2019 6.46 AM]

Development begins for a new scene.

[10/19/2019 5.53 AM]

First four build tests FAILED! Build APKs available in 'product/Forest Scene', with additional instructions to download and extract the Unity project.

The basic scene was built, and it looked like the following:

Running the application on a Google Pixel reveals the main problem with deploying high-end graphics on Android devices: the sheer number of rendering requests the application makes to the GPU. After some searching online, we found the following to be the problems:

  1. The above scene is, simply put, intensive: it has a lot of vertices (and polygons) for the GPU to render.

  2. Lighting and shadows take a lot of computational power to render.

  3. Imagine a highly detailed object (~1000 vertices) in the scene. When the player sees that object from a considerable distance, they do not see the details (just the overall shape and some texture), but Unity still renders the full mesh, textures, and lighting as if the object were being viewed up close. This wastes GPU resources.

We look to minimize these in our next commit. At the core of these issues is the fact that mobile GPUs have to perform stereo-pass rendering (rendering each frame twice, once per eye) to run a VR application on Android. The increased load leads to FPS drops, latency, and distorted graphics.
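
One way to check how a device is actually rendering stereo is to log the XR settings at startup, sketched below with an assumed script name; this is illustrative, not a script from the repo.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Minimal diagnostic sketch: logs whether the build is running multi-pass
// (one render per eye) or single-pass stereo, and the per-eye texture size.
public class StereoDiagnostics : MonoBehaviour
{
    private void Start()
    {
        Debug.Log($"VR device: {XRSettings.loadedDeviceName}");
        Debug.Log($"Stereo rendering mode: {XRSettings.stereoRenderingMode}");
        Debug.Log($"Eye texture: {XRSettings.eyeTextureWidth}x{XRSettings.eyeTextureHeight}");
    }
}
```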

[10/19/2019 2.21 AM]

  1. Initializing a basic Unity 3D project and beginning development of a simple VR scene.

Sharpening our tools [Planning phase]

[10/18/2019 10.00 PM]

  1. Collection of research articles that may help to generate effective virtual environments.

  2. Testing of deployment of a simple Flutter application on ZEIT.