Finalize AR library functionality and API #512
@benfry I have been implementing a tentative, minimalist AR API in PApplet around ARCore, following up on the initial work from last year's GSoC. The ARCore API is significantly more complex than GVR, with several classes for trackable surfaces and point clouds in real space, user-defined anchors associated with the trackables, pose transformations, etc., so this was trickier than implementing the VR library. I'm also pinging @stalgiag, who is leading the XR library for p5.js.
My goal was to find a small number of methods that would be enough to use Processing's drawing API to create AR scenes easily, without the need for any additional classes in the API. These methods should support at least two basic elements in AR: trackables and anchors. ARCore tracks planes and point clouds, but I restricted the API in the AR library to support only planes, as I believe they cover most situations. Users cannot create trackables, since they are detected and handled automatically by ARCore. Anchors are essentially positions fixed in relation to an existing trackable surface; one way of thinking of anchors is as matrix transformations that the user pushes as desired. Anchors can be created and deleted by users, and creating anchors from a screen touch is important since it allows interaction with the AR scene. So, I came up with the following methods:
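As a rough reference, these are the functions the example below exercises, with signatures reconstructed from how they are called there (the return and parameter types are assumptions, and this is not the complete list of 14):

// Reconstructed from the example below; types are assumptions, not confirmed signatures.
int     createAnchor(float x, float y);                    // anchor at a screen touch point
int     createAnchor(int i, float dx, float dy, float dz); // anchor offset from trackable i
void    deleteAnchor(int i);
int     anchorCount();
int     anchorId(int i);
int     anchorStatus(int i);
void    anchor(int i);                                     // apply the anchor's transformation
int     trackableCount();
int     trackableStatus(int i);
int     trackableType(int i);
float[] getTrackablePolygon(int i, float[] points);
void    getTrackableMatrix(int i, PMatrix3D mat);
boolean trackableSelected(int i, float x, float y);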
This is a total of 14 functions (counting the overloaded versions as one), so not a small number, but given the complexity of the ARCore API it's not too bad. I tried to keep them as "low-level" functions that users can apply to build their own custom AR classes. For advanced users, it is also possible to access the underlying ARCore objects and use ARCore's API directly. With these functions, one can create simple AR scenes with several objects attached to trackable planes and touch points, with a minimal amount of auxiliary variables:

import processing.ar.*;
float[] points;
PMatrix3D mat;
float angle;
int oldSelAnchor;
int selAnchor;
void setup() {
  fullScreen(AR);
  mat = new PMatrix3D();
}

void draw() {
  // The ARCore session, frame, and camera can be accessed through Processing's surface object
  // to obtain the full information about the AR scene:
  // PSurfaceAR surface = (PSurfaceAR) getSurface();
  // surface.camera.getPose();
  // surface.frame.getLightEstimate();

  // No background() call is needed; the screen is refreshed each frame with the image from the camera.
  lights(); // lighting is automatically corrected to reflect the light intensity in the real space

  if (mousePressed) {
    // Create a new anchor at the current touch point, saving the old anchor id so it can be deleted
    oldSelAnchor = selAnchor;
    selAnchor = createAnchor(mouseX, mouseY);
  }

  // Draw objects attached to each anchor
  for (int i = 0; i < anchorCount(); i++) {
    int id = anchorId(i);
    if (oldSelAnchor == id) {
      deleteAnchor(i);
      continue;
    }
    int status = anchorStatus(i);
    if (status == PAR.PAUSED || status == PAR.STOPPED) {
      if (status == PAR.STOPPED) deleteAnchor(i);
      continue;
    }
    pushMatrix();
    anchor(i);
    if (selAnchor == id) {
      fill(255, 0, 0);
    } else {
      fill(255);
    }
    rotateY(angle);
    box(0.15);
    popMatrix();
  }

  // Draw trackable planes
  for (int i = 0; i < trackableCount(); i++) {
    int status = trackableStatus(i);
    if (status == PAR.PAUSED || status == PAR.STOPPED) continue;
    if (status == PAR.CREATED && trackableCount() < 10) {
      // Add a new anchor associated with this trackable, 0.3 meters above it
      if (trackableType(i) == PAR.PLANE_WALL) {
        createAnchor(i, 0.3, 0, 0);
      } else {
        createAnchor(i, 0, 0.3, 0);
      }
    }
    points = getTrackablePolygon(i, points);
    getTrackableMatrix(i, mat);
    pushMatrix();
    applyMatrix(mat);
    if (mousePressed && trackableSelected(i, mouseX, mouseY)) {
      fill(255, 0, 0, 100);
    } else {
      fill(255, 100);
    }
    beginShape();
    for (int n = 0; n < points.length / 2; n++) {
      float x = points[2 * n];
      float z = points[2 * n + 1];
      vertex(x, 0, z);
    }
    endShape();
    popMatrix();
  }

  angle += 0.1;
}

Let me know if you have any comments.
An option to reduce the number of PApplet functions could be to put all the current trackables into a trackables array, similar to touches, so one could do:
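A minimal sketch of what that could look like (the trackables array and its fields are hypothetical here, by analogy with the touches array, and are not part of the API described above):

// Hypothetical: iterate over a PApplet-provided trackables array, similar to touches.
for (int i = 0; i < trackables.length; i++) {
  if (trackables[i].status == PAR.PAUSED || trackables[i].status == PAR.STOPPED) continue;
  pushMatrix();
  applyMatrix(trackables[i].matrix);   // assumed field holding the trackable's transformation
  fill(255, 100);
  beginShape();
  float[] pts = trackables[i].polygon; // assumed field holding the boundary polygon as (x, z) pairs
  for (int n = 0; n < pts.length / 2; n++) {
    vertex(pts[2 * n], 0, pts[2 * n + 1]);
  }
  endShape();
  popMatrix();
}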
Anchors are essentially a transformation matrix together with some additional info, such as their status, so I'm wondering whether it would make sense to define a PAnchor class to do away with all the anchor functions I defined previously, so that:
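A rough sketch of how that might read (PAnchor, the anchors array, and the method names are hypothetical, only meant to illustrate the idea):

// Hypothetical PAnchor-based version of the anchor loop from the example above.
PAnchor sel;

void draw() {
  lights();
  if (mousePressed) {
    if (sel != null) sel.delete();   // assumed: remove the previous anchor
    sel = new PAnchor(mouseX, mouseY); // assumed: anchor at the current touch point
  }
  for (PAnchor a : anchors) {        // assumed: anchors exposed like the touches array
    if (a.status() == PAR.STOPPED) {
      a.delete();
      continue;
    }
    pushMatrix();
    a.apply();                       // assumed: push the anchor's transformation matrix
    fill(a == sel ? color(255, 0, 0) : color(255));
    box(0.15);
    popMatrix();
  }
}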
Just some thoughts...
In the end, I went for an object-oriented API that's consistent with the existing libraries (sound, video, etc.). This eliminates the need to add new API to PApplet, and makes handling trackables and anchors more intuitive (I hope). Commits d471ead through 5466625 implement the new AR API. Essentially, the AR library now introduces three classes:
Also, the library supports a trackable event handling method that gets called when a new trackable is created, to allow for specific initialization steps. All classes are prefixed with AR so users can import the ARCore classes (i.e., Trackable and Anchor) without naming conflicts. The following code demonstrates the entire API:
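As a purely hypothetical illustration of the object-oriented style described above (ARTracker and all method names are assumptions, not the confirmed API; the actual implementation is in commits d471ead through 5466625):

// Hypothetical sketch of the object-oriented AR API; only ARTrackable and ARAnchor
// are mentioned above as class names, everything else is assumed for illustration.
import processing.ar.*;

ARTracker tracker;      // assumed: object that detects and tracks ARTrackable planes
ARAnchor touchAnchor;   // anchor created from a screen touch

void setup() {
  fullScreen(AR);
  tracker = new ARTracker(this);   // assumed constructor
  tracker.start();                 // assumed: begin plane detection
}

void draw() {
  lights();
  if (mousePressed) {
    if (touchAnchor != null) touchAnchor.dispose();    // assumed: remove the previous anchor
    ARTrackable hit = tracker.get(mouseX, mouseY);     // assumed: hit test against tracked planes
    if (hit != null) touchAnchor = new ARAnchor(hit);  // anchor attached to the selected plane
  }
  if (touchAnchor != null) {
    touchAnchor.attach();   // assumed: push the anchor's transformation
    fill(255, 0, 0);
    box(0.15);
    touchAnchor.detach();   // assumed: pop the transformation
  }
}

// The trackable event handler mentioned above, called when a new trackable is created
void trackableEvent(ARTrackable t) {   // assumed name and signature
  println("New trackable detected");
}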
After conversations with @stalgiag, I decided to go ahead with this version of the API (the one described in the previous comment and implemented in d471ead, 95adb35, 61a9935, 2f4d446, 10b825b, c96efaa, ee72172, da9b631, 5466625) for the next version of the mode (4.1.0). It can be refined after getting feedback from users, and taking into account consistency with p5.xr once the status of AR in WebXR becomes clearer.
Version 4.1-beta1 incorporates the AR library developed during GSoC 2018. Even though it is functional, at the moment it only allows using a single anchor from the first available plane detected by ARCore. Some simple API should be added to allow users to select planes, anchor points, etc.