Comprehensive Features List

This page is a reference list of Vizible's features; see the rest of the documentation to learn how to use each one.

Vizible Presentation Designer Features

Cloud-based content creation platform

- Seamless integration of all assets across your organization via the shared cloud asset library

- All assets are automatically shared across your organization and can be searched by name and tag within the cloud asset library

- Completed scenes can be shared between users within an organization

- All changes to scenes saved automatically to the cloud

- Users can collaboratively edit the same scene simultaneously (either on desktop or in VR)

- Save snapshots of scenes to preserve environments exactly as they were experienced in specific instances and for specific events

Drag-and-drop interface

- Import a huge range of content simply by dragging files into your cloud asset library

- All content is automatically uploaded to the cloud, encrypted, and converted to a VR asset

- Add assets to your virtual scene through a completely visual WYSIWYG world editor

- Snap-to tool allows you to easily attach an object to a ground plane

- Set scale, position and rotation with in-scene transform tools

- Copy and paste assets within the scene

- Hotkeys for placing objects at the mouse position in the scene

- Built-in “Stability Gauge” gives you real-time feedback on whether a scene will run smoothly in VR as you edit on your desktop

- Set your preferred Presenter and Attendee avatars from a number of preset choices

Supported Content Includes

- 3D Models (including scanned or point cloud data): .osgb, .fbx, .ive, .obj, .wrl, .dae, .dp, .ply

- Videos (2D and spherical [monoscopic and stereoscopic]): .avi, .mpg, .mpeg, .mp4

- Images: .bmp, .jpg, .jpeg, .png, .tiff, .tif, .gif

- PDFs (including multi-page documents and PowerPoint presentations saved as PDFs): .pdf

- Audio (ambient and spatialized): .wav, .mp3

Wide range of preset interactivity and built-in events

- Add interactivity to assets within your scene simply by right-clicking and selecting the desired interactivity from a drop-down menu

- Actions can be triggered by sensors set within the scene

- Specific tools and actions can be auto-assigned to users via virtual “stand-ins” to enable actions on cue such as “change viewpoint” or “laser pointer enabled”

Built-in interactivity includes

- Grabbable (pick up and inspect objects)

- Actions (all objects can be set up to spin, scale, move, rotate and fade)

- Animations (set looping, set frame, set speed)

- Assign tool to attendee or presenter

- Reset a user's viewpoint (including rotation, height and position)

- Set objects as visible / hidden

- Change frame for pictures, videos and PDFs

- Set alpha

- Copy and paste actions from one item to another

- Toggle gravity

- Set scale, rotation and position

- Advanced sensors to trigger events

- Teleport points for scripted navigation of large spaces

- All interactivity can be assigned to a user or triggered to play on cue (e.g. ‘play audio on slide change’)

Slide-based narrative format

- Familiar PowerPoint-style narrative foundation; scenes are composed of virtual “slides” which progress through a story or presentation

- The live presenter can navigate through the presentation using immersive tools

Easy Meeting Scheduling

- Schedule your VR meeting directly from the Presentation Designer interface

- Generate specific sessions which only your target attendees can access

- Copy and send invitation text “GoToMeeting-style”

- Join Edit Sessions with your colleagues to edit a scene in VR

Advanced Scene Creation Tools

Sensors

- Sensors can be used to create advanced interactivity and complex scenes

- Different sensors (Proximity, Time, and Media) can trigger actions associated with objects or instances in a presentation

- Proximity sensors are triggered by a target object interacting with the sensor. For example, releasing an apple inside a proximity box can trigger an action that “turns” it into an orange. Proximity sensors come in a variety of shapes and sizes, with multiple options for targets and interactivity.

- Targets determine which object triggers a proximity sensor (e.g. avatar hands can be set to trigger an object's movement once they enter a proximity sensor shaped like a button)

- Time sensors can trigger a wide range of actions based on a certain amount of elapsed time

- Media sensors trigger actions based on the conclusion of a piece of media (e.g. a video)

- Importantly, all sensors can trigger a slide change (e.g. on video end, change to the next slide). This allows the user to design complex, self-guided scenes with logical sequences of interactivity (see the sketch after this list).
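
Vizible configures all of this through the Presentation Designer GUI rather than code, but the underlying event model is easy to picture. Here is a minimal sketch in Python; the class and function names (`Scene`, `Sensor`, `MediaSensor`, `next_slide`) are hypothetical illustrations, not a real Vizible API:

```python
# A minimal sketch of the sensor/event model, under assumed names;
# Vizible itself configures sensors through the Presentation Designer GUI.

class Scene:
    def __init__(self):
        self.current_slide = 1

class Sensor:
    """A sensor fires its list of actions when its condition is met."""
    def __init__(self, actions):
        self.actions = actions

    def fire(self, scene):
        for action in self.actions:
            action(scene)

# Proximity and Time sensors would follow the same pattern, firing when
# a target enters a volume or when a set amount of time has elapsed.
class MediaSensor(Sensor):
    """Fires when a specific piece of media finishes playing."""
    def __init__(self, media, actions):
        super().__init__(actions)
        self.media = media

    def on_media_finished(self, media, scene):
        if media == self.media:
            self.fire(scene)

# Because any sensor can trigger a slide change, a logical sequence
# like "on video end, go to the next slide" is just a sensor plus an action:
def next_slide(scene):
    scene.current_slide += 1

scene = Scene()
sensor = MediaSensor(media="intro.mp4", actions=[next_slide])
sensor.on_media_finished("intro.mp4", scene)  # the video ends...
print(scene.current_slide)                    # ...and the slide advances to 2
```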

Avatar Recording

- A user’s avatar can be recorded interacting within their scene.

- Actions recorded include position, voice, tool interactivity and object interaction.

- Avatar recordings are automatically saved and converted to an asset which can be added to any scene

- An avatar recording can be used within a scene to give a virtual presentation or training seminar to a live attendee (a sketch of what a recording captures follows).
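
Conceptually, such a recording is a voice track plus a time-stamped stream of pose and interaction samples. A minimal sketch in Python of what each sample might contain; all field names here are hypothetical, not Vizible's actual recording format:

```python
# A minimal sketch of what an avatar recording captures: a voice track
# plus time-stamped pose/interaction samples. Names are assumed, not
# Vizible's actual format.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AvatarSample:
    time: float                           # seconds since recording start
    head_position: tuple                  # (x, y, z) of the user's head
    head_rotation: tuple                  # orientation, e.g. a quaternion
    active_tool: Optional[str] = None     # e.g. "laser_pointer", if any
    grabbed_object: Optional[str] = None  # object currently being handled

@dataclass
class AvatarRecording:
    """Saved as an asset, so it can be dropped into any scene."""
    audio_file: str
    samples: list = field(default_factory=list)

rec = AvatarRecording(audio_file="narration.wav")
rec.samples.append(AvatarSample(0.0, (0.0, 1.7, 0.0), (0.0, 0.0, 0.0, 1.0)))
```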

Parenting

- Objects can be associated and connected to each other within a scene through Parenting

- Parenting can be used to attach specific media to specific models (e.g. a video serving as a laptop or smartphone screen)

- Parenting can also be used to simplify editing a complex scene by attaching objects to each other (e.g. a coffee mug is parented to a desk, so when you move the desk, the coffee mug stays in the same place on the desk)

- Some actions, such as visibility, are inherited from the parent object (see the sketch below)
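
Conceptually, parenting builds a transform hierarchy: a child stores its position relative to its parent, so moving the parent moves the child, and properties such as visibility flow down the tree. A minimal sketch in Python, assuming a hypothetical `Node` class that is not Vizible's actual data model:

```python
# A minimal sketch of parenting as a transform hierarchy.

class Node:
    def __init__(self, name, local_position=(0.0, 0.0, 0.0), parent=None):
        self.name = name
        self.local_position = local_position  # offset relative to the parent
        self.visible = True
        self.parent = parent

    def world_position(self):
        """A child's world position is its parent's plus its own offset."""
        if self.parent is None:
            return self.local_position
        px, py, pz = self.parent.world_position()
        x, y, z = self.local_position
        return (px + x, py + y, pz + z)

    def is_visible(self):
        """Visibility is inherited: hiding the parent hides the child."""
        return self.visible and (self.parent is None or self.parent.is_visible())

desk = Node("desk", local_position=(2.0, 0.0, 0.0))
mug = Node("mug", local_position=(0.3, 0.75, 0.1), parent=desk)

desk.local_position = (5.0, 0.0, 0.0)  # move the desk...
print(mug.world_position())            # ...the mug follows: (5.3, 0.75, 0.1)

desk.visible = False
print(mug.is_visible())                # False: hidden along with its parent
```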

Set animation properties

- Animations built into 3D models can be dynamically edited within Presentation Designer

- Options for animation edits include looping, animation speed and animation frame

Vizible Presenter/Attendee Features

Join VR sessions from HTC Vive, Oculus Touch, desktops, warehouse-scale tracking systems and CAVEs

- Experience seamless VR collaboration from all major VR headsets and enterprise-class systems

- Meet with colleagues regardless of VR set-up, or on desktop

- All virtual assets are downloaded prior to the meeting and rendered locally

- VOIP and positional data are transmitted in real time to enable unparalleled interpersonal interaction

- Accurately perceive attendees' position and attention with face-to-face quality interactivity between all users

- Users have name tags that indicate whether they are talking or muted

Host VR meetings with external parties

- Invite VR users from outside your organization to experience your VR pitch or scene with the Attendee Client

- No sign-in required for attendees: simply send a download link and a valid session ID

- Download assets before your meeting for quick join times

- Detailed instructions and built-in tools for easy set-up

Standardized immersive controls in every meeting

- Intuitive two-handed and one-handed controls for all users

- Gestural and palette-based tool selection

- All attendees and the presenter have access to immersive controls from any device

Immersive Tool Set Includes

- Presenter (change virtual slides)

- Grabber (pick up objects)

- Laser pointer with measuring tool

- Pencil/drawer (at a distance and up close)

- TV remote (play videos, flip through multi-page PDFs)

- Magic carpet (joystick navigation)

- Teleport (jump to points)

- Mute and volume controls

Support for Large VR Audiences

- Current robust support for 4 simultaneous VR users, with larger audience size support coming soon*

- Scalable server architecture with planned support for webinar-sized VR audiences*

Collaborative Immersive Editing

- Join edit sessions to adjust and review your scene in VR

- Save out the position of virtually placed objects*

- Communicate with desktop-based users over VOIP to coordinate world creation in real time, at scale, in immersive VR

Integrated Recording and Data Analytics

- Each Vizible session has the built-in ability to derive sophisticated data analytics about the impact of your VR experience on your audience, and to gain meaningful, actionable feedback with implications for your wider business*

- Record complete sessions to be played back at a later date*

- Record user voice and position to capture a training session or VR meeting for future use or analysis*

- Gaze tracking enables measurement of user attention by tracking objects of interest within a user's field of view*

- Visualize data quantitatively or qualitatively*

- Overlay heat maps and positional data in VR to easily visualize users' experience*

- Export data as a CSV file* (a sketch of post-processing an export follows this list)
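
The CSV export means session data can be analyzed with ordinary tools. A minimal sketch in Python using only the standard library; the file name and the `gaze_target` column are assumptions, so adjust to the schema of the export you actually receive:

```python
# A minimal sketch of post-processing an exported session CSV.
# The "gaze_target" column name is hypothetical, not Vizible's schema.

import csv
from collections import Counter

def gaze_dwell_counts(path):
    """Count how many samples each object spent in users' gaze."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["gaze_target"]] += 1
    return counts

# Rank objects by the attention they received during the session:
# for target, n in gaze_dwell_counts("session_export.csv").most_common():
#     print(target, n)
```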

Administrative Tools

- Manage your Vizible organization through a sophisticated web portal

- Extensive documentation for every level of user

- Invite and manage users within your organization

- Track VR scenes and sessions across your organization

- Submit and receive feedback directly with WorldViz engineers

- Live Support chat with WorldViz Support staff, Weekdays, 8 AM – 6 PM PST

- Connect to active sessions directly from the Vizible web portal under the “Sessions” tab

Subscription Based Services

- Choose the plan that best suits your needs

- Easily scalable as your VR adoption grows

- Advanced service options include hosting your own independent Vizible server

- Plug-in add-ons are available for specific use cases and industries, and can be managed by the customer

Current Plugin Support

Vizible offers add-ons via our plug-in structure for specific advanced features, such as third-party device support

Current plug-ins include:

Biopac (physiological data recording)

DICOM model support (medical scans)*

*contact WorldViz to find out more about how we are offering these features.