As Software Architect, you will be in charge of designing, developing and maintaining the AR
/ AI assisted room capture of the magicplan app, for both the iOS and Android targets.
You will report directly to the Sensopia CTO.
In cooperation with the Product Manager, you will define the long-term roadmap describing the evolution of the Reality Capture in magicplan.
In coordination with the Development team, you will maintain magicplan's capture features,
both for iOS and Android, in upcoming magicplan app releases.
In close interaction with the Research team, you will assist them in integrating new AI-based features, from the early stages of POC up to productization inside magicplan.
In collaboration with the magicplan Design team, you will implement the future 2d and 3d GUIs necessary for a good user experience during the capture.
At a more technical level, your job involves:
Designing, developing and maintaining the key AR components working on top of ARKit (iOS) and ARCore (Android)
Designing, developing and maintaining the 3d representation of AR implemented on top of SceneKit (iOS) and Sceneform (Android),
Designing, developing and maintaining the real-time capture workflow to ensure the best user experience,
Implementing the latest developments in AI to automate capture and assist the user.
- Experience in building AR apps for mobile devices
- Experience in developing end-to-end POCs with a Research team
- Experience in building 3d environments on mobile
- Experience interfacing with design teams
- iOS / Android UI programming,
- C++ / Android / Objective C++ / Python languages,
- Knowledge of ARKit / SceneKit / CoreML frameworks,
- Knowledge of ARCore / Sceneform / NNAPI APIs,
The main job of the AR Architect is to ensure the best room capture experience in magicplan.
In order to do this, she/he has to master some mathematical 3d concepts:
- Projective geometry
- 3d rendering / AR Camera fundamentals including Camera Intrinsics and Extrinsics, as
well as anchor points
- Inertial Measurement Unit (IMU) and Vision sensor fusion
- Optimization techniques (Levenberg-Marquardt for instance)
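To make the camera intrinsics / extrinsics item concrete, here is a minimal pinhole-camera sketch in C++. All names (Intrinsics, projectPoint) are illustrative, not taken from ARKit, ARCore or any other SDK: it only shows how the intrinsic parameters map a camera-space 3d point to a pixel.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Minimal pinhole-camera sketch: the intrinsics are the focal lengths
// (fx, fy) and the principal point (cx, cy), all in pixels.
struct Intrinsics {
    double fx, fy;  // focal lengths in pixels
    double cx, cy;  // principal point in pixels
};

// Projects a camera-space point (x, y, z), with z > 0, onto the image
// plane, returning pixel coordinates (u, v).
std::array<double, 2> projectPoint(const Intrinsics& K,
                                   double x, double y, double z) {
    return { K.fx * x / z + K.cx,
             K.fy * y / z + K.cy };
}
```

The extrinsics (the camera pose) would be applied first, transforming a world-space point into camera space before this projection.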
In addition, it is good to be familiar with Computer Vision techniques like feature extraction and feature tracking. Good knowledge of the OpenCV library is a plus.
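As a rough illustration of what feature tracking involves, the sketch below searches a frame for the position that best matches a small template patch, using a sum-of-squared-differences (SSD) score. Real trackers (e.g. the KLT tracker in OpenCV) are far more sophisticated; this only conveys the core matching idea, and all names are hypothetical.

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

using Image = std::vector<std::vector<int>>;  // row-major grayscale pixels

// Sum of squared differences between `patch` and the same-sized region of
// `img` whose top-left corner is at (r, c).
long long ssd(const Image& img, std::size_t r, std::size_t c,
              const Image& patch) {
    long long sum = 0;
    for (std::size_t i = 0; i < patch.size(); ++i)
        for (std::size_t j = 0; j < patch[0].size(); ++j) {
            long long d = img[r + i][c + j] - patch[i][j];
            sum += d * d;
        }
    return sum;
}

// Exhaustively searches `img` for the (row, col) where `patch` matches best.
std::pair<std::size_t, std::size_t> trackPatch(const Image& img,
                                               const Image& patch) {
    long long best = std::numeric_limits<long long>::max();
    std::pair<std::size_t, std::size_t> bestPos{0, 0};
    for (std::size_t r = 0; r + patch.size() <= img.size(); ++r)
        for (std::size_t c = 0; c + patch[0].size() <= img[0].size(); ++c) {
            long long s = ssd(img, r, c, patch);
            if (s < best) { best = s; bestPos = {r, c}; }
        }
    return bestPos;
}
```

In a real tracker the search would be restricted to a small window around the feature's previous position rather than scanning the whole frame.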
The candidate must know how to implement these concepts in mobile environments (iOS / Android).
This means a good level of programming in C++ / Objective-C++ / Java / Swift with Xcode and Android Studio.
She/he will have to work with some specific frameworks offered by iOS and Android:
- ARKit for tracking the AR experience on iOS
- SceneKit for managing the 3d Scene on iOS
- ARCore for tracking the AR experience on Android
- Sceneform for managing the 3d Scene on Android
Note: we understand that it is difficult to find someone with both iOS and Android expertise.
iOS is the primary environment we value.
In addition, the candidate will have to implement some Deep Learning models developed by the Research team to extract semantic information on top of the AR session. The candidate will have to be familiar with how to feed a Deep Learning model with an image, how to retrieve the important 2d information the model extracts from the image, and how to transform it into 3d information.
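One common way to turn a 2d model output into 3d information is to invert the pinhole projection using the camera intrinsics and a depth estimate for the detected pixel. The sketch below is a hypothetical illustration of that unprojection step; the struct and function names are not from any framework.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Camera intrinsics: focal lengths and principal point, in pixels.
struct CameraIntrinsics {
    double fx, fy, cx, cy;
};

// Lifts a detected pixel (u, v) with depth z (distance along the camera's
// optical axis) back to a 3d point in camera space.
std::array<double, 3> unproject(const CameraIntrinsics& K,
                                double u, double v, double z) {
    return { (u - K.cx) / K.fx * z,
             (v - K.cy) / K.fy * z,
             z };
}
```

The depth z could come from an AR session's depth map or from a hit test against detected planes; the resulting camera-space point is then transformed by the camera pose to get world coordinates.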
The person should be familiar with retrieving the video stream from an AR session through the ARKit / ARCore APIs and transforming it into the correctly normalized input for the model.
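The normalization step typically means converting 8-bit pixel values to floats and applying per-channel mean/std statistics. Here is a minimal sketch; the mean and std values are the commonly used ImageNet statistics, chosen purely as an example, since each model defines its own expected input distribution.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Converts an interleaved 8-bit RGB buffer into the float tensor layout a
// network might expect: scale to [0, 1], then per-channel mean/std
// normalization. The statistics below are illustrative (ImageNet values).
std::vector<float> normalizeRGB(const std::vector<std::uint8_t>& rgb) {
    static const float mean[3]  = {0.485f, 0.456f, 0.406f};
    static const float stdev[3] = {0.229f, 0.224f, 0.225f};
    std::vector<float> out(rgb.size());
    for (std::size_t i = 0; i < rgb.size(); ++i) {
        float v = rgb[i] / 255.0f;                  // scale to [0, 1]
        out[i] = (v - mean[i % 3]) / stdev[i % 3];  // interleaved RGB channels
    }
    return out;
}
```

In practice the frame also has to be converted from the camera's native pixel format (e.g. YCbCr on iOS) and resized to the model's input resolution before this step.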
Finally, the candidate should be familiar with creating and manipulating 3d primitive shapes in order to visually materialize the 3d scene perceived with ARKit / ARCore.
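SceneKit and Sceneform already provide ready-made primitives (SCNBox, ShapeFactory), but it helps to understand the vertex data behind them. As a sketch, here is a hypothetical helper that generates the eight corner vertices of an axis-aligned box centered at the origin, the kind of geometry used to materialize a detected wall or piece of furniture.

```cpp
#include <array>
#include <cassert>
#include <vector>

// Generates the 8 corner vertices of an axis-aligned box of the given
// width, height and depth, centered at the origin. Illustrative only;
// a real mesh would also carry indices, normals and texture coordinates.
std::vector<std::array<float, 3>> boxVertices(float w, float h, float d) {
    std::vector<std::array<float, 3>> v;
    for (int x : {-1, 1})
        for (int y : {-1, 1})
            for (int z : {-1, 1})
                v.push_back({x * w / 2, y * h / 2, z * d / 2});
    return v;
}
```

Such vertices would then be fed into the platform's geometry API (e.g. an SCNGeometry source on iOS) to render the captured room's boxes and planes.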
Another part of his/her job will be to develop rapid Proofs of Concept (POCs) to allow
experimentation with new models developed by the Research team, as well as new 3d user
interface experiences proposed by the Design team.
That means some experience in rapid development, prototyping and design sketching.