
Jeon, Hong, Yi, Chun, Kim, and Choi: Interactive Authoring Tool for Mobile Augmented Reality Content


As mobile augmented reality (AR) technologies spread, many users want to produce the AR content they need by themselves. To keep pace with such needs, we have developed a mobile AR content builder (hereafter referred to as MARB) that enables the user to easily connect a natural marker and a virtual object with various interaction events used to manipulate the virtual object in a mobile environment, so that users can simply produce AR content from natural photos and virtual objects that they select. MARB consists of five major modules: target manager, virtual object manager, AR accessory manager, AR content manager, and AR viewer. The target manager, virtual object manager, and AR accessory manager register and manage natural target markers, various virtual objects, and content accessories (such as decorating images), respectively. The AR content manager defines a connection between a target and a virtual object, enabling various interactions for desired functions such as translation/rotation/scaling of the virtual object, playing music, etc. The AR viewer augments various virtual objects (such as 2D images, 3D models, and video clips) on the pertinent target. MARB has been developed as a mobile application (app) so that AR content can be created using mobile smart devices alone, without switching to a PC environment for authoring. In this paper, we present the detailed organization and applications of MARB. It is expected that MARB will enable ordinary users to produce diverse mobile AR content for various purposes with ease and will contribute to expanding the mobile AR market through the spread of a variety of AR content.

1. Introduction

Augmented reality (AR) is a technology that combines reality and virtual objects [1]. It enables users to recognize invisible information (i.e., virtual objects or environmental information) more easily by combining that information with the real world [2]. AR enables the user to interact with a virtual object or information in more intuitive ways and to get a quick response. This capacity for immediate response makes users more immersed in the content itself. Nowadays, the AR market shows a continuous growth trend, and AR is regarded as a promising technology [3,4].
Implementing AR technologies requires various types of devices: input devices for filming the physical world and capturing information about the real environment, sensing devices for offering a variety of intuitive interactions, processors for rapid computation, etc. [5-7]. In previous AR systems, this equipment was prepared separately and assembled into a system. Now, however, as smart devices with high-performance components (e.g., camera, high-speed processor, accelerometer, gyroscope, GPS, etc.) have rapidly diffused, ordinary users can experience AR using only one smart device without any complex equipment. Furthermore, as software development kits (SDKs) and open libraries [8-13] for AR are provided through public internet sites, AR content can be developed more easily, and various types of AR content have been released in the mobile application (app) market.
Even though the environment for AR has matured recently, AR apps have not been used continuously, and many have disappeared in a short time. The main reason is that most AR apps use static content, that is, limited markers and virtual objects designated by the app developer, and updating the content is not easy. In most cases, users cannot change the targets and virtual objects used in AR apps by themselves. This limitation on updating AR content has been an obstacle to utilizing AR apps. To overcome it, AR authoring tools that allow users to easily update target markers and virtual objects are required. In this paper, we introduce an interactive authoring tool, MARB (mobile AR content builder), that enables users to easily manage target markers and virtual objects with various interaction events for manipulating the virtual objects.
This paper is organized as follows: Section 2 analyzes previous mobile AR content and authoring tools. Section 3 introduces the primary components of MARB and addresses the organization of the database that makes AR content easy to update. Section 4 describes the various interaction events and interfaces of MARB, and compares MARB with existing AR content builders. Finally, Section 5 concludes the paper with future work.

2. Mobile AR Contents Analysis

We analyzed mobile AR content that had been downloaded more than 10,000 times from the Google Play store as of August 2016. This mobile AR content is broadly divided into graphics-oriented content using target markers and information-oriented content based on the user's location. Graphics-oriented content using target markers is designed to augment a virtual object on a proper natural target, such as a photo of a real object, whereas location-based content is designed to provide location-based service information according to the user's location using GPS data.
Fig. 1 classifies the collected mobile AR content by augmented subject. About 80% of the collected content was graphics-oriented content using target markers, which recognizes a target and displays a virtual object on it. The remaining 20% was location-oriented content that provides service information based on location data obtained from GPS. Most of the virtual objects used by marker-based mobile AR were 3D models, followed by 2D images (20%) and videos (9%). Fig. 2 shows the domains of mobile AR content. Most of the education content and books using mobile AR, which account for the first and second largest shares, were for infants and children.
Fig. 3 depicts how much of the content allowed dynamic interaction and which kinds of interaction were used. As shown in Fig. 3, about 70% of the surveyed mobile AR content included an interactive event. The most frequently used interaction was tapping: 61% of the mobile AR content using an interactive event used a tapping method. The tapping event was mainly used to select a virtual object: when the virtual object was touched, its shape was changed or detailed information on it was displayed. The next most widely used interaction event was panning, in other words, dragging over the screen. When a user dragged a virtual object on the screen, the virtual object was moved or rotated. Other commonly used interactions included pinching and spreading, which were generally used to make the virtual objects bigger or smaller.
In the past, GPS-based mobile AR content dominated the AR market [10], but in recent years, vision-based AR content using a mobile camera has increased explosively. However, in most cases, users are not allowed to freely select a target marker, virtual object, or interaction event; only the limited set of markers, virtual objects, and events specified by the developer is provided. In this paper, we introduce an interactive authoring tool for AR content that enables users to easily manage target markers and virtual objects with various interaction events. First, in the next section, we explain the primary components of MARB and the organization of its database in detail.

3. Design of Authoring Tool for Mobile AR Content

3.1 System Overview

Various easy-to-use maintenance functions are required to enable inexperienced developers or general users to produce high-quality mobile AR content with ease and to update it freely. The proposed system includes such convenient maintenance functions, which enable users to select a web image or a camera image easily obtained from the mobile device as a marker, and to use any 2D image, 3D model, or video clip as a virtual object mapped to a marker. Moreover, users have a wide choice of interaction methods, so that different interaction events can be designated for each marker.
Fig. 4 shows the system configuration diagram of MARB. In the proposed system, users can create any pairing of the markers and virtual objects stored in the database using the AR content manager, and see the augmented virtual objects on the matched marker through the AR viewer. The AR information set by the user through the AR content manager is saved in a database file on the mobile device. This database is loaded when the AR viewer is activated, and the user-specified AR content is displayed on the mobile device based on this information. In the next subsections, we explain the primary functions that users employ to create AR content with MARB.
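The authoring/viewing split described above can be sketched as follows. This is an illustrative sketch only: the real app persists AR information in a database on the device (see Section 3.3), while this example uses JSON for brevity, and all names are hypothetical.

```python
import json
import os
import tempfile

# Sketch of MARB's authoring/viewing split: the AR content manager saves
# user-defined pairings of markers and virtual objects to a file on the
# device; the AR viewer loads them when it is activated.
def save_ar_contents(path, contents):
    """Called by the (hypothetical) content manager after authoring."""
    with open(path, "w") as f:
        json.dump(contents, f)

def load_ar_contents(path):
    """Called by the (hypothetical) AR viewer at startup."""
    with open(path) as f:
        return json.load(f)

# One AR information entry: a marker, a virtual object, and enabled events.
contents = [{"marker": "tree", "object": "bird.png", "events": ["tap", "pan"]}]
path = os.path.join(tempfile.mkdtemp(), "marb.json")
save_ar_contents(path, contents)
loaded = load_ar_contents(path)
```

The important property is that authoring and viewing share only the persisted AR information, so either side can be updated independently.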

3.2 Functional Design

MARB is composed of six function managers: the database manager, which creates and modifies the database; the target manager, which registers and deletes target markers; the virtual object manager, which manages the registration and deletion of virtual objects; the AR accessory manager, which registers the AR accessories; the AR content manager, which modifies and deletes the AR information and defines interaction types; and the AR viewer, which augments the user-defined AR information on the mobile device screen.

3.2.1 Target manager

The target manager manages markers and maintains a marker table that stores the marker information. To register a marker, a photographed image or an image in the mobile gallery can be selected. The selected image is registered both in the marker table of the mobile device and in the Vuforia Cloud Database. The suitability of the marker registered in the cloud is then assessed, and the result is indicated as a star score on a scale of 0 to 5. A good marker is an image that is not vertically or horizontally symmetrical and has high contrast. As shown in Fig. 5, more feature points can be observed in the highly evaluated marker (Fig. 5(b)), while Fig. 5(a) shows an image that cannot be used as a target marker. As the evaluation takes some time, the target manager updates the evaluation information for the registered image in a message-driven manner when the evaluation is completed. To delete a marker, the marker information is removed from the marker table of the mobile device and from the Vuforia Cloud Database simultaneously.
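The flow of registering a marker now and receiving its evaluation later can be sketched as below. `CloudTargetClient` is a hypothetical stand-in for the Vuforia Cloud Database calls (the real API is not reproduced here); the point of the sketch is the message-driven score update.

```python
# Sketch of deferred marker evaluation. CloudTargetClient is hypothetical;
# the real app talks to the Vuforia Cloud Database over HTTP.
class CloudTargetClient:
    def __init__(self):
        self._pending = {}

    def upload(self, name, image_path, on_evaluated):
        # Cloud evaluation takes time; keep the callback for later delivery.
        self._pending[name] = on_evaluated
        return "pending"

    def finish_evaluation(self, name, star_score):
        # Simulates the cloud finishing its suitability assessment.
        self._pending.pop(name)(star_score)

class TargetManager:
    def __init__(self, cloud):
        self.cloud = cloud
        self.markers = {}  # name -> {"status": ..., "score": ...}

    def register(self, name, image_path):
        status = self.cloud.upload(
            name, image_path, lambda score: self._on_evaluated(name, score))
        self.markers[name] = {"status": status, "score": None}

    def _on_evaluated(self, name, score):
        # Message-driven update once the cloud result arrives (0 to 5 stars).
        self.markers[name] = {"status": "active" if score > 0 else "failure",
                              "score": score}

cloud = CloudTargetClient()
tm = TargetManager(cloud)
tm.register("tree", "/sdcard/DCIM/tree.jpg")
cloud.finish_evaluation("tree", 5)  # cloud reports a 5-star marker
```

Until the callback fires, the marker stays in the "pending" state, which is why the interface (Section 4.1.2) shows unevaluated markers in red.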
The marker table stores information on the marker selected by the user, such as the unique key of each marker, the marker name, and the server access keys needed to register and delete a marker in the cloud database. The marker suitability evaluation is conducted by the cloud database, and the assessment result for the selected image is also saved in the marker table.

3.2.2 Virtual object manager

Virtual object manager saves and deletes virtual objects, and sets up a virtual object table that keeps the information of each virtual object. The registered virtual object information (e.g., unique key of the virtual object, virtual object type that differentiates 2D images, 3D models, and videos, and full path of the object file) is saved in the virtual object table. For 3D models, users can select a thumbnail image of the 3D model that is used to display the 3D model list.

3.2.3 AR content manager

The AR content manager registers, modifies, and deletes AR information, that is, a pairing of a target and a virtual object. To register AR information, a user first selects a target marker that has been registered in advance, as shown in Fig. 6. Then, a virtual object and interaction events are selected to compose the AR information. At this point, the user can add accessory images for decorating a 2D image and enable various interaction events, such as pinching and spreading for scaling an object, tapping for playing music, etc. Several different interactions can be specified for one virtual object, so that different events can trigger different functions on the same object.
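Some events carry required attachments (a tap needs a sound file, a double tap needs a second object ID, per Table 3), so the registration step can validate a pairing before saving it. A minimal sketch, with field names mirroring Table 3 but otherwise illustrative:

```python
# Sketch of AR-information validation at registration time.
# Field names mirror Table 3; the validation rules themselves are an
# assumption about how the app could guard its event attachments.
def validate_ar_info(info):
    errors = []
    if info.get("marker_id") is None or info.get("object_id") is None:
        errors.append("a marker and a virtual object must both be selected")
    if info.get("tap") == 1 and not info.get("tap_sound_file"):
        errors.append("tap event requires a sound file")
    if info.get("double_tap") == 1 and info.get("double_tap_change_object_id") is None:
        errors.append("double tap requires a second virtual object")
    return errors

ok = validate_ar_info({"marker_id": 1, "object_id": 2,
                       "tap": 1, "tap_sound_file": "song.mp3"})
bad = validate_ar_info({"marker_id": 1, "object_id": 2, "double_tap": 1})
```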

3.2.4 AR viewer

The AR viewer reads the AR information registered by the user through the AR content manager and starts a marker search using the camera. When a marker is detected, the AR viewer augments the virtual object according to the saved AR information and provides the registered interaction events. Separate rendering technologies are required to display the virtual object on the screen, depending on its type (2D image, 3D model, or video clip). Fig. 7 shows the flow chart for rendering the registered virtual object on the matched marker. In addition, users can film the augmented scene as a video, which can be shared via SNS.
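The per-type rendering branch of Fig. 7 reduces to a dispatch on the object type codes from the virtual object table. A schematic version, where the `render_*` functions are placeholders for the actual 2D/3D/video pipelines:

```python
# Schematic of the AR viewer's per-type rendering dispatch (Fig. 7).
# The render_* functions stand in for the real image/model/video pipelines.
VIDEO, IMAGE_2D, MODEL_3D = 1, 2, 3  # type codes from the object table

def render_video(obj):  return f"playing video {obj}"
def render_2d(obj):     return f"drawing 2D image {obj}"
def render_3d(obj):     return f"rendering 3D model {obj}"

RENDERERS = {VIDEO: render_video, IMAGE_2D: render_2d, MODEL_3D: render_3d}

def augment(detected_marker, ar_info):
    """Look up the AR info for a detected marker and render its object."""
    entry = ar_info.get(detected_marker)
    if entry is None:
        return None  # no AR info for this marker; keep searching
    obj_type, obj_file = entry
    return RENDERERS[obj_type](obj_file)

ar_info = {"tree": (MODEL_3D, "bird.obj")}
result = augment("tree", ar_info)
```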

3.3 Database Design

The database has three main tables: the marker table, the virtual object table, and the AR information table. Table 1 depicts the schema of the marker table, which contains information about the markers. It is composed of the following eight columns: Marker ID (marker's unique key); Image Name (name of the marker to be registered); Image File Path (location of the marker image file); Status of the marker registered in the cloud database (0: pending, 1: active, 2: failure); Star Score, which indicates a marker's suitability as evaluated by the cloud database (recognition rate: '0' inappropriate to '5' very appropriate); Server Access Key, which gives the right to access the cloud database; Server Secret Key, which grants the right to modify the cloud database; and Target Key, which is assigned to the target registered in the server.
Table 2 shows the schema of the virtual object table, which stores the virtual object information selected by the user. The table is composed of the following three columns: Object ID (unique key of the virtual object), Object Type (1: video, 2: 2D image, 3: 3D model), and Object File Name (name of the virtual object).
Table 3 shows the AR information table, which contains the pairs of markers and virtual objects with their enabled interaction events. The table is composed of eighteen columns. The AR Contents ID is the unique key of the AR information, and the Marker ID and the Object ID hold the unique keys of the marker and the virtual object that are defined as a pair, respectively; through these two keys, the AR information table refers to the marker and virtual object tables. The remaining columns carry information about the interaction events. The use status of each event is recorded as 0 (not used) or 1 (used). The "Tap" and "Double Tap" columns can be used when selecting a sound play event or a virtual object conversion event, respectively. The full path of the sound file selected by the user and the ID of the virtual object to convert to are saved in the "Tap Sound File" and "Double Tap Change Object ID" columns, respectively. If a 2D virtual object is selected, the color and message of the text entered by the user, and the location of the saved text file, are saved together.
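The schemas of Tables 1-3 can be sketched as SQLite DDL. The app's actual schema is not published, so the column names and types below are an illustrative reconstruction of the tables as described:

```python
import sqlite3

# Illustrative reconstruction of the MARB database schema (Tables 1-3);
# the actual column names and types in the app may differ.
DDL = """
CREATE TABLE marker (
    marker_id         INTEGER PRIMARY KEY,
    marker_name       TEXT,
    image_file_path   TEXT,
    level             INTEGER,  -- 0: pending, 1: active, 2: failure
    recognition_rate  INTEGER,  -- star score, 0 (bad) to 5 (suited)
    server_access_key TEXT,
    server_secret_key TEXT,
    server_target_key TEXT
);
CREATE TABLE virtual_object (
    object_id        INTEGER PRIMARY KEY,
    object_type      INTEGER,  -- 1: video, 2: 2D image, 3: 3D model
    object_file_name TEXT
);
CREATE TABLE ar_contents (
    ar_contents_id  INTEGER PRIMARY KEY,
    marker_id       INTEGER REFERENCES marker(marker_id),
    object_id       INTEGER REFERENCES virtual_object(object_id),
    tap             INTEGER DEFAULT 0,  -- event flags: 0 not used, 1 used
    tap_sound_file  TEXT,
    double_tap      INTEGER DEFAULT 0,
    double_tap_change_object_id INTEGER REFERENCES virtual_object(object_id),
    shake           INTEGER DEFAULT 0,
    pinch           INTEGER DEFAULT 0,
    spread          INTEGER DEFAULT 0,
    pan             INTEGER DEFAULT 0,
    two_finger_tap  INTEGER DEFAULT 0,
    text_sentence   TEXT,
    text_file_path  TEXT,
    text_color      INTEGER,
    text_font       INTEGER,
    frame_id        INTEGER,
    effect_id       INTEGER
);
"""

con = sqlite3.connect(":memory:")
con.executescript(DDL)
con.execute("INSERT INTO marker (marker_name, image_file_path, level) "
            "VALUES (?, ?, ?)", ("tree", "/sdcard/DCIM/tree.jpg", 0))
con.execute("INSERT INTO virtual_object (object_type, object_file_name) "
            "VALUES (2, 'bird.png')")
con.execute("INSERT INTO ar_contents (marker_id, object_id, tap, tap_sound_file) "
            "VALUES (1, 1, 1, 'song.mp3')")
row = con.execute("""SELECT m.marker_name, v.object_file_name, a.tap
                     FROM ar_contents a
                     JOIN marker m ON m.marker_id = a.marker_id
                     JOIN virtual_object v ON v.object_id = a.object_id""").fetchone()
```

The join at the end shows how the AR viewer can resolve a stored pairing back to a marker name, an object file, and the enabled event flags.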

4. Implementation of MARB

4.1 Interaction and Interface

4.1.1 Interaction and decoration

The proposed system was implemented in Android Studio as an Android app that can be executed on any Android device. MARB provides seven interaction types and the actions matched to them, as shown in Table 4. MARB uses touch-screen interactions (tap, double tap, two-finger tap, pinch, spread, and pan) and a shake interaction implemented using the acceleration sensor. The tapping event plays or stops the music designated by the user. If a video-type virtual object is selected, the selected video is played or stopped, as shown in Fig. 8.
The double-tapping interaction replaces the object with another virtual object specified by the user. That is, a target can be matched to two virtual objects, and the two objects are displayed in rotation by double tapping. Fig. 9 shows how to change a virtual object using double tapping. Two-finger tapping rotates a virtual object by 90° toward the marker direction and the horizontal direction. Fig. 10 shows a virtual object rotated by two-finger tapping. As shown in Fig. 11, the shaking interaction hides a virtual object from the screen. If the shaking interaction is executed again, the hidden virtual object reappears.
Pinching scales the virtual object down, whereas spreading scales it up, as shown in Fig. 12. The panning event moves the virtual object. Fig. 13 shows a virtual object moved by the panning interaction. A decoration can be selected with a 2D virtual object. As shown in Fig. 14, there are three kinds of decorations: frames, effects, and user-defined messages. The frame is used to decorate the border of an image, and the effects provide moving image effects (floral leaf, firecracker). After selecting a font and a color and inputting a message, the user can increase or decrease the size of the message or move its location on the screen, as shown in Fig. 15. The selected decorations are displayed together with the 2D virtual image, as in Fig. 15(d).
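The mapping in Table 4 amounts to a simple event-to-action dispatcher. A minimal sketch with hypothetical names (the real app wires these actions to Android touch and sensor callbacks, which are omitted here):

```python
# Minimal sketch of Table 4's event-to-action mapping; scale factors and
# state fields are illustrative assumptions, not the app's actual values.
class VirtualObjectState:
    def __init__(self):
        self.scale = 1.0
        self.position = (0.0, 0.0)
        self.rotation = 0          # degrees
        self.visible = True
        self.active_object = "primary"

    def on_event(self, event, delta=None):
        if event == "pinch":             # scale the virtual object down
            self.scale *= 0.9
        elif event == "spread":          # scale the virtual object up
            self.scale *= 1.1
        elif event == "pan" and delta:   # move the virtual object
            x, y = self.position
            self.position = (x + delta[0], y + delta[1])
        elif event == "two_finger_tap":  # rotate the virtual object by 90 degrees
            self.rotation = (self.rotation + 90) % 360
        elif event == "shake":           # hide/show the virtual object
            self.visible = not self.visible
        elif event == "double_tap":      # swap to the other registered object
            self.active_object = ("secondary"
                                  if self.active_object == "primary"
                                  else "primary")

s = VirtualObjectState()
s.on_event("spread")
s.on_event("two_finger_tap")
s.on_event("shake")
```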

4.1.2 Interface

The initialization module reads the database and checks the version; it creates a database if the database file does not exist on the mobile device. The target manager enables a user to select an image from the photo album or to take a picture with the camera immediately, as shown in Fig. 16(a). The selected image is sent to the cloud server to check its adequacy as a target (Fig. 16(b)). The marker shown in red in Fig. 16(c) indicates that the server evaluation has not finished. The red color disappears and a star score is displayed when the evaluation is completed, as shown in Fig. 16(d). Markers with low scores are blocked out in black, as in Fig. 16(a)-(d). The marker information is saved both in the marker table of the mobile device and in the cloud database of the server.
Fig. 17 presents the interface and the procedure for adding a 2D image as a virtual object in the virtual object manager. Fig. 17(a) shows the three types of virtual objects: 2D images, 3D models, and video clips. If a user wants to add a 2D image as a virtual object, he/she touches the "add" button for 2D images to open the photo gallery, as in Fig. 17(b). After an image is selected from the photo gallery, the user names the image and registers it as a virtual object, as shown in Fig. 17(c). Fig. 17(d) shows the virtual object list modified by adding the 2D image. Fig. 18 shows the interface and procedure for adding a 3D model to the virtual object list; the procedure is the same as that for adding a 2D image. A 3D model is selected from the device storage using the file system manager. When a 3D model is selected, a thumbnail image for the model is automatically generated, as shown in Fig. 18(c).
The AR content manager shows the markers defined by the user and the virtual object thumbnails with their available interaction events. The information can be edited by touching the object. AR information registration is performed in the order of marker selection, virtual object selection, and interaction event selection. Markers with poor star scores or markers that have not completed evaluation cannot be selected. If the 2D image type is selected for a virtual object, decorating images can be selected additionally. When an interaction event for the virtual object is selected, the AR content information is saved in the AR content table. The interfaces for this registration are shown in Fig. 19. MARB can save the AR scene rendered by the AR viewer as a video clip and share the recorded video via SNS. Fig. 20 illustrates this sharing process. If a user touches "record" in the menu of the AR viewing page, as in Fig. 20(a), recording starts. Recording finishes when the camera button is touched, as shown in Fig. 20(b), and confirmation for sharing is processed, as in Fig. 20(c). If the user confirms sharing, the recorded video clip is sent via SNS. Fig. 20(d) shows the video clip shared via SNS.

4.2 Comparative Analysis of AR Builder Tools

Table 5 compares the functions of MARB with those of other AR builders that support creating marker-based mobile AR content: Circus AR Creator, GorealAR Browser, and Augment 3D AR. The "Operating environment" column shows the environment in which the AR builder runs. Circus AR Creator and GorealAR Browser run on the web: authoring is conducted in a web environment, while augmentation is conducted in a mobile environment using separate mobile AR content. In contrast, Augment 3D AR and MARB conduct both authoring and augmentation in a mobile environment. The "Virtual object" column shows the types of virtual objects augmented by each builder. Circus AR Creator and MARB support various virtual objects, such as videos, 2D images, and 3D models. The "Providing user-defined interaction events" column shows whether the user can specify an interaction event for an augmented object when creating AR content; Circus AR Creator and MARB allow this. Note, however, that MARB provides more diverse interaction events than Circus AR Creator, such as the event using the acceleration sensor, double tapping, and two-finger tapping. "Video recording" is the function of saving the currently augmented screen as a video clip; only MARB supports it. "SNS sharing" indicates whether the information created by the AR builder can be shared via SNS; Circus AR Creator, Augment 3D AR, and MARB support it.
The comparison shows that MARB provides more AR content authoring functions than the other AR builders. In other words, users can utilize the tool in various ways by creating AR content conveniently and on their own initiative.

5. Conclusions

This paper introduced the design and implementation details of MARB, a mobile AR content authoring tool that enables users to conveniently produce AR content by themselves in an intuitive way. MARB was designed such that 1) a mobile AR authoring tool lets the user maintain content with ease, 2) the range of virtual object choices is widened so that the virtual objects needed by general users can be registered, and 3) individual and diverse interactions are provided by allowing a different event to be set for each marker.
MARB, implemented with these purposes, manages each marker, virtual object, AR information item, and interaction for each virtual object. The primary components of MARB are the target manager, virtual object manager, AR content manager, and AR viewer. The user-defined AR information is defined and recorded in the database saved on the mobile device and in the cloud database. The virtual object and interaction events specified by the user act on the pertinent marker according to the information loaded by the AR viewer, and the resulting scene can be recorded and shared via SNS. MARB is expected to enable users to utilize mobile AR content in various ways according to their needs, because even general users can create mobile content with ease. It is expected that MARB will contribute to expanding the mobile AR market through the spread of a variety of AR content.
The implemented MARB has some limitations: target recognition is slow, and only one target can be recognized at a time. Solving these issues requires additional functional design and implementation that can register and manage device targets actively in a real-time environment. As future work, we will modify the configuration of MARB for multiple target recognition and design various AR services based on MARB.


This research was supported by 2014 KGIT X-Project and 2016 service project of Nuribom Co. Ltd.


Jiyoung Jeon
She is a researcher in the VCAR Lab, Seoul Media Institute of Technology (SMIT), Korea. She received her B.S. degree in Computer Science and Engineering from Duksung Women's University in 2012 and her M.S. degree in Newmedia Engineering from Seoul Media Institute of Technology (SMIT) in 2015. Her research interests include augmented reality and mobile computing.


Min Hong
He is a professor at the Department of Computer Software Engineering, Soonchunhyang University in Asan, Korea. He received his B.S. in Computer Science from Soonchunhyang University in 1995. He also received his M.S. in Computer Science and Ph.D. in Bioinformatics from the University of Colorado in 2001 and 2005, respectively. His research interests are in computer graphics, mobile computing, physically based modeling and simulation, bioinformatics applications, and u-healthcare applications. At present, he is the director of the ICT Convergence Rehabilitation Engineering Research Center and the Computer Graphics Laboratory at Soonchunhyang University.


Manhui Yi
He is the CTO of NURIBOM, a software company in Korea. He received his B.A. and M.S. degrees in Aviation Electronics from Korea Aerospace University in 1991 and 1993, respectively. His research interests include image processing, computer vision, AR/VR, and HCI.


Jiyoon Chun
She is a professor in Department of Newmedia at Seoul Media Institute of Technology (SMIT), Korea. She received the M.F.A. degree from School of Visual Arts in 2002 and the Ph.D. degree in Art Technology from Sogang University in 2014. Her current research interests include design concept, media design, art & design application, augmented reality content, mobile art and interactive media art.


Ji Sim Kim
She is a professor in the Liberal Arts College at Gachon University, Korea. She received her M.S. in Computer Science from Ewha Womans University in 1999 and her Ph.D. in Educational Technology from Ewha Womans University in 2009. She was a researcher at the Teaching and Learning Center of Konkuk University in Korea between 2009 and 2010 and a manager at the Korea Productivity Center. Her research interests include mobile learning and augmented reality.


She is a professor in the Department of Newmedia at Seoul Media Institute of Technology (SMIT), Korea. She received her M.S. and Ph.D. degrees in Computer Science from Ewha Womans University in 1991 and 2005, respectively. She was a researcher at the R&D departments of KCI Co. and POSDATA Co. in Korea between 1991 and 1999. Her research interests include image processing, computer vision, computer graphics, augmented reality, and human-computer interaction.


1. W. Piekarski and B. H. Thomas, "Tinmith-metro: new outdoor techniques for creating city models with an augmented reality wearable computer," in Proceedings of the 5th International Symposium on Wearable Computers, Zurich, Switzerland, 2001, pp. 31-38.
2. HD. Yang, CY. Lee, and SW. Hwang, "A study on the strategy to activate the mobile augmented reality market through business model analysis," Internet and Information Security, vol. 1, no. 1, pp. 5-27, 2010.

3. Korea Creative Content Agency, Culture technology depth reports: mobile AR technology and industry trends; 2010, [Online]. Available: https://www.kocca.kr/knowledge/publication/ct/__icsFiles/afieldfile/2010/10/13/lTX6IZfuEeJN.pdf.

4. Korea Technology and Information Promotion Agency for SME, Augmented Reality contents; 2014, Available: http://smroadmap.smtech.go.kr/0201/view/m_code/K31/id/1518/idx/1038.

5. HS. Jeon, "Mobile augmented reality," Week Technology Trends, vol. 1447, pp. 25-37, 2010.

6. DC. Kim, JW. Lee, and WT. Woo, "Augmented Reality 2.0 technology and content application technology: status & prospect," Information and Communications Magazine, vol. 28, no. 6, pp. 54-60, 2011.

7. HM. Lee, DC. Kim, and WT. Woo, "The next generation of augmented reality interface technology and prospects for virtual object manipulation," Korea Information Processing Society Review, vol. 17, no. 5, pp. 60-66, 2010.

8. Vuforia Developer [Online]; Available: https://developer.vuforia.com.

9. ARToolKit [Online]; Available: www.hitl.washington.edu/artoolkit/.

10. AndAR [Online]; Available: https://code.google.com/p/andar/.

11. JY. Jeon, JY. Chun, M. Hong, HS. Yum, YH. Choi, and YJ. Choi, "Design and implementation of interactive authoring tool for mobile augmented reality content," Journal of Internet Computing and Services, vol. 16, no. 4, pp. 25-37, 2015.
12. Circus AR Creator [Online]; Available: http://www.circusar.com/wp/?page_id=4220.

13. GorealAR browser [Online]; Available: http://gorealar.com/products_gorealar_browser.html.

Fig. 1
Classification of mobile AR contents based on augmented subjects.
Fig. 2
Classification of domains of mobile AR contents.
Fig. 3
Interactive event usage status and type of interaction used in mobile AR contents.
Fig. 4
System architecture of MARB.
Fig. 5
Feature points of each target image marked in yellow. (a) ‘0’ level target and (b) ‘5’ level target.
Fig. 6
AR contents registration procedure.
Fig. 7
Procedure of AR rendering conducted by AR viewer.
Fig. 8
A video clip augmented on the target and played using the tapping. Before tapping (a) and after tapping (b–c).
Fig. 9
Two virtual objects matched to a marker and changed in rotation by double tapping. 3D models matched to a tree target (a) and 2D images (b) with a decoration frame matched to one target.
Fig. 10
A 3D model (a) and a video clip (b) rotated by two-finger tapping.
Fig. 11
Hiding a virtual object by shaking interaction.
Fig. 12
Scaling the virtual object up and down. Initial virtual object (a), scaling down by pinching (b) and scaling up by spreading (c).
Fig. 13
Moving a virtual object by panning interaction.
Fig. 14
Decoration selection screen. Decorating text properties and frames (a), animation effects (b).
Fig. 15
Decoration of a 2D virtual image. Decoration using a frame (a) and user input text (b–c). A decorated 2D image displayed on a target (d).
Fig. 16
Interface for marker registration. (a) Menu for adding a marker, (b) interface for naming and registering an image as a marker, (c) a marker candidate under evaluation represented in red, (d) the marker list modified by adding a new marker.
Fig. 17
Interface for adding a 2D image as a virtual object. (a) Three types of virtual objects (2D image, 3D model, and video clips), (b) photo gallery for selecting a photo, (c) interface for naming a 2D image and registering it as a target, (d) virtual object list modified by adding a 2D image.
Fig. 18
Interface for adding a 3D model as a virtual object. (a) Three types of virtual objects (2D image, 3D model, and video clips), (b) interface for naming a 3D model and registering it as a target, (c) thumbnail image automatically generated after a model is selected, (d) virtual object list modified by adding a 3D model.
Fig. 19
Interface for registering AR Contents information. (a) AR information list−pairs of a marker and a virtual object, (b) interface for marker selection, (c) interface for virtual object selection, (d) interface for decoration selection, (e) Interface for editing of decorating text, (f) interface for interaction selection, (g) Naming and registering a pair of a marker and a virtual object, (h) AR information list modified by adding a new pair of a marker and a virtual object with interaction events.
Fig. 20
Sharing of AR rendering results using SNS. (a) Start of recording, (b) end of recording, (c) start of sharing, (d) the video clip shared using SNS.
Table 1
Marker table
Column | Data type | Field value
Marker ID | Integer | Marker table primary key
Marker Name | Integer | Marker file name
Marker Image File Path | String | Marker file path
Level | Integer | Server approval (0: pending, 1: active, 2: failure)
Recognition Rate | Integer | Target recognition evaluation ('0' bad to '5' suited)
Server Access Key | String | Server database access key
Server Secret Key | String | Server database secret key
Server Target Key | String | Server target ID
Table 2
Virtual object table
Column | Data type | Field value
Object ID | Integer | Object table primary key
Object Type | Integer | Virtual object type (1: video, 2: 2D image, 3: 3D model)
Object File Name | String | Virtual object file name
Table 3
AR contents information table
Column | Data type | Field value
AR Contents ID | Integer | AR contents table primary key
Marker ID | Integer | Marker table primary key
Object ID | Integer | Object table primary key
Tap | Integer | Using event (0: not used, 1: used)
Tap Sound File | String | Sound file path
Double Tap | Integer | Using event (0: not used, 1: used)
Double Tap Change Object ID | Integer | Object table primary key
Shake | Integer | Using event (0: not used, 1: used)
Pinch | Integer | Using event (0: not used, 1: used)
Spread | Integer | Using event (0: not used, 1: used)
Pan | Integer | Using event (0: not used, 1: used)
Two Finger Tap | Integer | Using event (0: not used, 1: used)
Text Sentence | Integer | Text sentence
Text File Path | String | Text image file path
Text Color | Integer | Text color index
Text Font | Integer | Text font index
Frame ID | Integer | Frame index
Effect ID | Integer | Effect index
Table 4
Actions executed by interaction event
Interaction type | Action
Tap | Play video clip or sound
Double Tap | Change the virtual object to another one
Shake | Remove the virtual object
Pinch | Scale the virtual object down
Spread | Scale the virtual object up
Pan | Move the virtual object
Two Finger Tap | Rotate the virtual object
Table 5
Comparison of AR Builder functions
AR builder tool | Operating environment | Virtual object | Providing user-defined interaction events | Video recording | SNS sharing
Circus AR Creator [12] | Web | Video, 2D image, 3D model | O | X | O
GorealAR browser [13] | Web | Video, 3D model | X | X | X
Augment 3D AR | Mobile | 3D model | X | X | O
MARB | Mobile | Video, 2D image, 3D model | O | O | O