Extended Reality

From New Media Business Blog




Extended reality (XR) is an umbrella term for any technology that alters our environment, experience, and interaction with the real world by adding digital elements to a physical or fully simulated environment. It encompasses, but is not limited to, augmented reality (AR), mixed reality (MR), and virtual reality (VR). [1]

Figure 1. AR, VR, and MR [2]

Augmented Reality (AR)

Augmented reality technology expands on the real world by overlaying purposefully designed virtual information and objects that provide additional data about the world around us. This enriches the user’s perception while still providing a live view of the real world. [3]

How Does Augmented Reality Work?

Augmented reality projections can be displayed on various devices such as glasses, smartphones, and head-mounted display units. For these computer-generated projections to appear accurately on top of a real-world view, the device first needs to capture its position and orientation within the surroundings through depth sensors, light sensors, gyroscopes, and/or cameras. The software then uses this data to determine the distance to objects, the speed, direction, and angle of motion, and the overall orientation. The processed data allows the device to display AR animation in real time and in the relevant location. [3]
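The overlay step described above ultimately boils down to projecting 3D anchor points into 2D screen coordinates. The sketch below illustrates the pinhole-camera math at the core of that step; the camera parameters (focal lengths and principal point) are illustrative assumptions, not values from any real device.

```python
# Minimal sketch of how an AR overlay lands on screen: a 3D anchor point,
# expressed in the camera's coordinate frame, is projected to 2D pixel
# coordinates with a pinhole-camera model. Real AR frameworks do this
# (plus tracking and lens-distortion correction) every frame.

def project_point(x, y, z, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Project a 3D point (metres, camera frame) to pixel coordinates.

    fx/fy are focal lengths in pixels and (cx, cy) is the principal point;
    these are illustrative values for a 1280x720 view.
    """
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = fx * (x / z) + cx
    v = fy * (y / z) + cy
    return u, v

# An anchor 2 m straight ahead appears at the image centre...
print(project_point(0.0, 0.0, 2.0))   # (640.0, 360.0)
# ...and shifts right on screen when it sits to the camera's right.
print(project_point(0.5, 0.0, 2.0))   # (840.0, 360.0)
```

As the device’s sensors update the anchor’s position relative to the camera, re-running this projection each frame is what keeps the virtual object “pinned” to the real world.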

Types of Augmented Reality


Marker-Based AR

Marker-based AR uses cameras and sensors to continuously scan the environment for a specific physical image pattern (the marker) and displays virtual 3D objects, text, or animation on top of real-world objects. This means that virtual 3D objects will only be shown properly if the camera can recognize the image pattern. [4]


Markerless AR

Markerless AR places virtual 3D objects in a real-world environment by examining features present in real time. Unlike marker-based AR, it does not require a predefined physical image pattern to anchor the content. Instead, markerless AR uses Simultaneous Localization and Mapping (SLAM) technology to continuously scan the environment and build a map for positioning the virtual 3D object. [4]


Projection-Based AR

Projection-based AR eliminates the need for computers or screens, as virtual 3D objects are projected from a stationary source directly onto the user’s physical space. Users can move freely within a specified area and interact with the virtual 3D objects through sensors that detect movement. By projecting artificial light onto real flat surfaces, virtual 3D objects can create an illusion of depth with accurate position and orientation. [4]


Location-Based AR

Location-based AR displays virtual 3D objects by reading real-time sensor data about the user’s present location. By using existing smartphone features such as the camera, GPS, compass, and accelerometer, location-based AR does not require any image or object as a prompt. [4]
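As a rough sketch of the location-based approach, the snippet below computes the compass bearing from a GPS fix to a point of interest and checks whether it falls inside the camera’s horizontal field of view. The coordinates and the 60° field of view are illustrative assumptions, not taken from any real application.

```python
import math

# Toy sketch of location-based AR: given the phone's GPS fix and compass
# heading, compute the bearing to a point of interest (POI) and decide
# whether its label should be drawn on screen.

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def in_view(heading, target_bearing, fov=60.0):
    """True if the target falls inside the camera's horizontal field of view."""
    diff = (target_bearing - heading + 180) % 360 - 180
    return abs(diff) <= fov / 2

# A POI a few blocks to the north-east of the user (illustrative coordinates).
b = bearing_deg(49.2827, -123.1207, 49.2850, -123.1100)
print(round(b), in_view(0.0, b), in_view(60.0, b))
```

A real app would combine this with the accelerometer (to handle tilt) and the distance to the POI (to scale the label), but the heading check above is the essence of deciding where an annotation belongs on screen.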

Real-World Applications of Augmented Reality


Gaming

Pokemon Go, an AR mobile game released in 2016, had earned more than $6 billion in revenue by 2020. By examining the user’s location and surroundings in real time using the camera, GPS, and accelerometer in a smartphone, virtual 3D characters can be superimposed onto the smartphone display. [5]

Figure 2. Pokemon Go [6]

Retail & Home Design

Furniture retailer IKEA has developed IKEA Studio, which uses the camera and light detection and ranging (LiDAR) sensors in a smartphone to capture 3D plans of the user’s home and redesign it virtually. Instead of visiting a furniture store, users can virtually place a piece of furniture in their surroundings to see whether it would fit. [3]

Social Media

By utilizing the cameras and sensors built into our smartphones, social media applications such as Snapchat and Instagram are capable of using AI to support AR. By detecting the user’s face with AI and treating it as an image pattern marker, a filter can be accurately overlaid on top of the face, giving a very realistic look. [3]


Navigation

AR applications can also be easily used by tourists, providing real-time data on direction, location, and destination. Along the way, AR applications can give more in-depth information, including speed and object detection, and assist with advanced navigation. [5]

Virtual Reality (VR)

Virtual reality replaces the real world almost entirely by creating unique, interactive virtual spaces for users to step into with the help of specially created hardware, including headsets, controllers, and even environmental feedback. As humans use their five senses to perceive reality, VR technology stimulates those same senses to immerse users in a simulated environment. [1]

How Does Virtual Reality Work?

Virtual reality uses computer technology to create simulated 3D environments where users can interact and explore beyond what is possible on phones and monitors. To interact with 3D objects, users require VR headsets and/or controllers containing gyroscopic sensors, cameras, and accelerometers to provide inputs and experience outputs. VR headsets accomplish this by using computer vision and advanced graphics to generate 3D images and video, adding depth and reconstructing the scale and distances between static 2D images.

When developing VR hardware, developers must take into consideration several crucial factors, including field of view and frame rate, spatial audio, and head tracking. As the human eye can handle up to 1,000 frames per second and spans a surrounding vision of up to 220º, VR screens need to support at least 60 to 120 fps and a field of view approaching 180º to avoid feelings of disorientation and nausea. The VR audio experience also needs to incorporate spatial, or 360º, sound to realistically simulate a unique audio environment and support better realism. Finally, head and position tracking lets the system know which direction users are facing, allowing fully autonomous movement through real and virtual space. [7]

Figure 3. Organization of VR [8]
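The frame-rate requirements above translate directly into a rendering deadline: at a given refresh rate, tracking, rendering, and display must all finish within a fixed number of milliseconds, or the user perceives judder and may feel nauseous. A tiny illustrative calculation:

```python
# Per-frame time budget for a VR renderer: at N frames per second,
# the whole pipeline must complete within 1000/N milliseconds.

def frame_budget_ms(fps):
    """Time available to produce one frame, in milliseconds."""
    return 1000.0 / fps

for fps in (60, 90, 120):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):.1f} ms per frame")
```

This is why VR headsets demand far more processing power than flat-screen gaming: halving the frame time while rendering a separate image for each eye leaves very little headroom per frame.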

Types of Virtual Reality


Non-Immersive VR

Non-immersive VR features a computer-generated virtual environment in which the user simultaneously remains aware of, and in control within, their physical environment. Users control activities or avatars through a computer, which allows interaction with the virtual environment without the user being directly immersed in it. Video games are a prime example of non-immersive VR. [9]


Semi-Immersive VR

Semi-immersive VR is a combination of non-immersive and fully immersive VR. Using VR headsets or screens, users can move and explore within the virtual environment, but only to a limited extent, such as a visual experience without any physical sensation. Semi-immersive VR is now common in the real-estate business via virtual tours, which provide a more interactive visual experience. [9]


Fully Immersive VR

Fully immersive VR delivers the most realistic virtual experience by attempting to immerse all of the user’s senses in a simulated environment. Users feel as if they are visually and physically present in the virtual world, and events occurring there can be felt as well. Through special hardware, including VR headsets, gloves, and body-worn connectors equipped with sensors, any inputs or movements recorded by the sensors are projected into the virtual world in real time to provide the best simulation experience. [9]

Real-World Applications of Virtual Reality

Gaming & Entertainment

According to the Entertainment Software Association, 73 percent of the 169 million gamers in the U.S. reported owning a gaming console, and 29 percent said they had a VR-capable system. With the advancements of VR, gaming allows for an even more immersive experience by letting players choose their own point of view. [10]

Workplace Collaboration

With a VR headset, employee training can easily be done both onsite and at home. The ability to place a user in complex decision-making situations and give them hands-on experience in real time is very beneficial. Moreover, multiple users can simulate a meeting room, combined with motion tracking, to allow for better communication and gestures. [11]


Education

Special needs students can also benefit greatly from the use of VR technology. As traditional learning via textbook may sometimes be ineffective, VR provides a more responsive and engaging way of learning, which aids in faster learning. As technology progresses and our attention spans decrease, we will continue to expect well-rounded experiences when learning anything new. [11]

Military Training

VR can also help simulate realistic battlefield situations for training soldiers to respond efficiently without putting them in danger. Simulations of combat, confrontations, and the like can replace expensive and sometimes dangerous real-life exercises. The ability to change scenarios makes VR attractive to all branches of the military and the defence industry. [11]


Healthcare

In the healthcare industry, VR has already been incorporated into training for surgical procedures. This provides doctors with practice opportunities without exposing patients’ lives to danger. Doctors can also gain better clarity on a patient’s condition by using VR to let patients relive experiences, which is especially valuable for patients with PTSD. [10]

Retail & Advertisement

Retailers can help potential customers put themselves in situations where they can “try on” clothes or objects and get a sense of how they interact with an environment. For example, a bride-to-be could try on a wedding dress and see it in a realistic wedding setting. Store owners can also offer customers a virtual 3D, 360º e-commerce experience where customers can browse, examine, and purchase items online. [10]

Mixed Reality (MR)

Mixed Reality blends real and virtual worlds to create complex environments where physical and digital elements can interact in real-time. Users can experience MR environments through a headset, phone or tablet, and can interact with digital objects by moving them around or placing them in the physical world. [12]

How Does Mixed Reality Work?

Mixed reality allows users to interact with and manipulate virtual 3D objects through high-level imaging and sensing technologies. Along with advancements in processing power, computers are now capable of capturing advanced environmental inputs, including a person’s body position in the real world, the objects and surfaces surrounding the user, and ambient lighting and sound. As we move through the real world, our actions are mapped into a digital reality, allowing us to experience both the real and virtual worlds simultaneously. This means the placement and positioning of 3D objects are crucial to keeping users present in the real-world environment during the experience, letting them navigate easily between the real world and virtual 3D objects. [12]

Real-World Applications of Mixed Reality


Manufacturing

With MR, an inspection worker could visually retrieve information about energy consumption and see how individual components of a device are working through a headset. Teams in manufacturing can also create virtual instructions and cues that others can follow. For example, a trainee can learn how to manufacture an engine efficiently without the risk of danger. Mixed reality also supports IoT-enabled devices, like helmets, headsets, and glasses, that can interact with other devices, retrieve data through their sensors, and distribute the collected data through a network. These devices empower workers to make better predictions and timely decisions. [12]


Space Exploration

In October 2018, Microsoft and NASA’s Jet Propulsion Laboratory created OnSight, a mixed reality software that allows users to navigate and explore Mars virtually by turning pictures from the Curiosity rover into a 3D landscape. In addition to supporting the study of Mars’ geology, OnSight was developed to allow remotely located scientists to walk around, meet, and communicate with each other in real time. [12]


Aviation

Airbus, an aircraft manufacturer, collaborated with Microsoft to test how Microsoft’s HoloLens 2 can assist with real-time experiential learning and digital manufacturing. Through mixed reality, a detailed step-by-step 3D model guide allows staff to learn more efficiently, cutting assembly time and cost instead of risking faulty assembly or wasted material. [12]


Education

Some MR apps can now help students learn through a virtual 3D landscape. Cleveland Clinic and Case Western Reserve University developed HoloAnatomy, a HoloLens application that allows instructors to teach medical anatomy through simulations to students remotely. [12]

Remote Work

Remote teams and workers can collaborate and communicate in real-time through MR. Users can also bypass language barriers by using translation applications that translate language in real-time. [12]


Healthcare

Medical professionals can also incorporate MR to view and share patient records in a more interactive form. This makes analyzing remote patients more realistic than working through a keyboard and mouse. [12]

History of Extended Reality

Early 1800s

Stereoscopic Photos and Viewers

In 1838, scientist Sir Charles Wheatstone outlined the concept of “stereopsis”, or “binocular vision”. His research demonstrated how the brain combines two 2D images of the same object (one viewed by each eye) to make the image appear to have a sense of depth and 3D immersion. This led to the development of the first stereoscopes, devices that use a pair of mirrors at 45-degree angles to the user’s eyes to merge two images into a single 3D image with the illusion of depth. The design principles of stereoscopic displays are still used in today’s VR systems to bring a sense of depth to digital images, enhancing the feeling of immersion. [13]
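Wheatstone’s stereopsis is still the basis of computational stereo: the horizontal shift (“disparity”) of the same object between a left and a right image encodes its depth. For an idealized rectified camera pair, depth = focal length × baseline / disparity. The sketch below uses illustrative values only; the focal length and the eye-like 64 mm baseline are assumptions, not figures from any particular device.

```python
# Depth from binocular disparity for an idealized pinhole stereo pair.
# A nearby object shifts a lot between the two views; a distant one barely moves.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.064):
    """Depth in metres; baseline_m is roughly human interpupillary distance."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Larger disparity means the object is closer -- exactly the cue the brain uses.
print(depth_from_disparity(40.0))  # about 1.1 m
print(depth_from_disparity(10.0))  # about 4.5 m
```

VR headsets exploit the same relationship in reverse: by rendering each eye’s image with the disparity a real scene would produce, flat screens are made to feel three-dimensional.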

Early 1900s

Science Fiction Story About VR (1935)

In 1935, science fiction writer Stanley G. Weinbaum published a short story named Pygmalion's Spectacles. The story features a main character who uses a pair of goggles to experience a fictional world through virtual smell, taste, and touch. This closely mirrors today’s VR experience through the use of a head-mounted unit. [14]

Figure 4. Pygmalion's Spectacles [15]


Mid 1900s

Sensorama, The First VR Machine

In 1956, cinematographer Morton Heilig created Sensorama, an arcade-style theatre cabinet for up to four people that offered an interactive experience stimulating all the human senses. A combination of full-colour 3D video, audio speakers, vibrating chairs, and oscillating fans with smell effects was able to fully immerse individuals in the film. By 1960, Heilig had also patented the first head-mounted display unit, the Telesphere Mask, which provided stereoscopic 3D images with wide vision and stereo sound, although it was not equipped with motion tracking at the time. [14]

Figure 5. Sensorama [16]

Sword of Damocles, The First AR Head-Mounted Display Unit

In 1968, Ivan Sutherland, a Harvard professor, and his student Bob Sproull created the first AR head-mounted display unit, named The Sword of Damocles. The head-mounted display was connected to a computer rather than a camera and was able to display computer-generated virtual 3D graphics and wire-frame shapes. However, it never developed beyond a lab project, as it was too heavy and restricting for users to wear comfortably. [14]


VIDEOPLACE

In 1969, computer artist Myron Krueger developed a computer-generated environment that responded to the people in it. His earlier projects, named GLOWFLOW, METAPLAY, and PSYCHIC SPACE, were progressions that ultimately led to the development of VIDEOPLACE, what many believe to be the first interactive VR system. Using a mix of computer graphics, light projection, video cameras, and screen displays, it was able to measure the user’s position and allow users to communicate with each other despite being miles apart. [14]

MIT Aspen Movie Map

In 1977, MIT created the Aspen Movie Map. This program enabled users to virtually walk through the city of Aspen, Colorado using photographs taken from a car, with three modes included: summer, winter, and polygons. It offered an interactive first-person view of the city, similar to today’s Google Street View. Ultimately, it suggested that VR could transport people to other places. [14]


Late 1900s

VPL Research

In 1985, VR pioneers Jaron Lanier and Thomas Zimmerman founded Visual Programming Lab (VPL) Research, which became known as the first company to commercialize VR gear. VPL Research’s inventions include the Dataglove, the EyePhone HMD, and the Audio Sphere. The Dataglove, costing $9,000, used a glove as a form of environmental input. The EyePhone HMD, costing $9,400, visually immersed users in a virtual environment. The Audio Sphere was a stereo unit that provided 360-degree spatial audio of the user’s surroundings. In 1987, Jaron Lanier coined the term “virtual reality” and popularized it with the public. However, the EyePhone suffered from technological limitations: it could only generate 5 or 6 frames per second, which caused dizziness and nausea in users. [14]

Figure 6. VPL Research [17]

NASA For Astronaut Training

In 1989, NASA and Scott Foster, founder of Crystal River Engineering, developed the audio element of the Virtual Environment Workstation Project (VIEW), a VR training simulator for astronauts. By using the Power Glove for fine simulation of touch interaction along with real-time spatial 3D audio, astronauts were able to train and run procedures without going to space. [14]

Virtuality by Virtuality Group

In 1991, The Virtuality Group released Virtuality, a series of arcade VR machines in which users could play within a virtual 3D world in real time. Players wore VR headsets, and the machines were networked together to allow for multi-player gaming experiences. Ultimately, Virtuality became the first mass-produced VR entertainment system, featuring popular arcade games such as Pac-Man. [13]

Figure 7. Virtuality Arcade Game [18]

Nintendo Virtual Boy

In 1995, Nintendo launched the Virtual Boy, also known as VR-32. It was a monochrome gaming console believed by many to be the first portable console to display 3D graphics. First released in Japan and North America for $180, it proved a commercial failure within one year, as it lacked colour graphics, software support, and ease of use. [13]

Landmark VR For PTSD Treatment

In 1997, researchers at Georgia Tech and Emory University collaborated to use VR for PTSD treatment by recreating war-zone scenarios, in a project ultimately known as Virtual Vietnam. Researchers provided veterans with controlled exposure to traumatic triggers and devised ways to treat symptoms of PTSD. VR technology gave researchers more in-depth information about what patients see and experience, instead of relying only on written reports. [14]


2000s

Google Street View

In 2007, Google, in collaboration with Immersive Media, introduced Street View, an improvement to the Google Maps service offering a street-level, 360-degree view along the road. Using moving cars fitted with custom dodecahedral cameras, panoramic images were captured to show users roads, buildings, and more. [13]

Oculus Rift VR

In 2010, 18-year-old Palmer Luckey invented the first prototype of the Oculus Rift VR headset. A Kickstarter campaign raised almost $2.5 million to fund the product and the development of the first prototype. It featured a 90-degree field of vision, which had never been done before, combined with more computer processing power, gyroscopic sensors, and an accelerometer. Ultimately, in 2014, Facebook bought Oculus VR for $2 billion. [14]

Project Morpheus, Samsung Gear VR, Google Cardboard

In the same year, Sony unveiled Project Morpheus, a VR headset for the PS4 console. The following year, Google introduced Cardboard, a low-cost, do-it-yourself stereoscopic viewer for mobile phones which gave a sense of depth and immersion to visual experiences. Samsung also announced the Samsung Gear VR, a stereoscopic headset viewer for Samsung Galaxy smartphone users. [13]

HoloLens Headset By Microsoft

In 2016, Microsoft released what many believe to be the first MR device, named HoloLens. Unlike most VR headsets, HoloLens offered see-through displays, allowing users to see their physical environment while interacting with digital objects, or holograms, as if they existed in the real world. Holograms are made of light and sound rendered into the HoloLens display, which responds to the user’s movements, sight, voice, and even the surfaces and environments around them. [12]

ARKit and ARCore

In 2017, Apple introduced ARKit and Google introduced ARCore. Both are Software Development Kits (SDK) which allow developers to create AR-based applications for smartphones. Both SDKs can take advantage of features in our smartphones including motion capture, location and image tracking, light and distance sensors, and more. [19]

History of the Web

The Web, as it is known today, has become a central facet of society and has been integral to the development of the Information Age.

[1990s] Introduction to the World Wide Web

Sir Tim Berners-Lee, an English computer scientist, is credited with inventing the World Wide Web (WWW) in the early 1990s. In 1989, Berners-Lee was employed as a software engineer at CERN in Switzerland, where he developed his vision for what would become the World Wide Web. [20] Although his vision was never officially a CERN project, Berners-Lee was able to develop the three foundational building blocks of Web 1 during his time there. In 1991, CERN opened the World Wide Web to the public, allowing individuals outside the organization to join the web community.

The first version of the World Wide Web, Web 1, was built on three fundamental technologies developed by Berners-Lee in late 1990. HTML, URI, and HTTP still make up the foundation of today’s Web. HTML, or HyperText Markup Language, is the formatting language of the web. A web page is made up of numerous HTML elements, which consist of an opening tag, content, and a closing tag to display the desired content on the web page. [21] A URI, or Uniform Resource Identifier, is also commonly referred to as a URL. The URI/URL is a unique address given to every resource on the Web and is used to identify each resource. [21] It is common to refer to a resource’s URL as the ‘link’ to a certain web page, which is common terminology in today’s vernacular. The last element of the web is HTTP, or HyperText Transfer Protocol, which enables communication between web browsers and servers, resulting in the retrieval of linked resources (URLs) upon request. [21] These three elements act as the foundation of the Web today and of the versions to come.
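The pieces described above are visible in any web address. Taking apart a URL (a URI that includes a location) with Python’s standard library shows the HTTP scheme and the parts a browser uses to fetch an HTML resource; the address itself is illustrative, not a real page.

```python
from urllib.parse import urlparse

# Dissecting a URL into the components a browser uses to make an HTTP
# request for an HTML resource.

url = "https://example.com/history/web?section=1"
parts = urlparse(url)

print(parts.scheme)   # "https" -- the transfer protocol (HTTP over TLS)
print(parts.netloc)   # "example.com" -- the server to contact
print(parts.path)     # "/history/web" -- the resource on that server
print(parts.query)    # "section=1" -- extra parameters for the server
```

The browser resolves the server name, opens an HTTP connection to it, requests the path, and renders the HTML document that comes back, which is Berners-Lee’s three building blocks working in sequence.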

Web 1 is commonly referred to as the ‘read-only’ web, as the early stages of the web were mainly composed of simple, static web pages linked together. Web pages at this time were formatted relatively simply, with text and images in the HTML code – early web pages of some URIs can still be found using archive sites like the Wayback Machine. Users were unable to create and upload their own web pages or content, as we are able to do today, making the first version of the web largely non-interactive. Due to this, Web 1 was primarily used as a source of information and reading material for users.

[2000s] Transition to Web 2

In the early 2000s, the way the Web was used shifted toward its current state. There was no direct change in the technical specifications of the Web; rather, the transition from Web 1 to Web 2 marked the migration from the read-only web to the read-write web. The term ‘Web 2’ was coined in 2004, following the Web 2.0 Summit held by Tim O’Reilly and Dale Dougherty, which discussed the new and improved version of the World Wide Web. [22] The transition occurred gradually throughout the late 1990s and early 2000s, as the web slowly became more interactive and developed.

The foundation of Web 2 is driven by social networking, user-generated content, and cloud computing. While Web 1 was primarily focused on providing accessible, readable content to users, the transition to Web 2 allowed users to produce their own content on the web, interact with one another, and much more. However, according to Berners-Lee, the difference between Web 1 and Web 2 is merely semantics; Berners-Lee’s vision for the Web was always a collaborative and interactive space. [22] Therefore, the transition to ‘Web 2’ solidified the standards that Web 1 had originally been set to accomplish.

Essentially, Web 2 is an enhanced version of Web 1. Beyond the basics of HTML in Web 1, web pages are now created using technologies such as AJAX, JavaScript, and CSS, which allow for more complex and interactive web pages. The World Wide Web heavily revolves around the flow of information and content; Web 2 allows users to receive as well as produce their own information and content. Users can easily receive and sort information from the web and view dynamic content that responds to their input. Currently, most web pages, if not all, constantly have information flowing between the web page owners and the users in various forms, such as uploads, comments, and purchases. The web contains multiple online tools and platforms that allow users to share their experiences and thoughts, and applications that interact with the user. Due to the interoperable nature of the web, the user is not only the end-user of the web but also an active participant. [23]

Features of Web 2

With how the web is currently being used, three main aspects are often discussed: centralization, user-generated content, and monetization.


Centralization

Centralization in the web indicates the domination of a few pillar corporations that provide ‘free’ platforms and services to users, with the tradeoff that users have their personal data collected. Personal data is often used to gather demographic information, issue targeted ads to users, and make business decisions. Network effects and economies of scale have resulted in these companies (i.e., Meta and Amazon) having the ability to dominate the web community. [24]

Within a centralized system, all participants are connected to a central authority, which means that the system has a low network diameter.[25] As all information is controlled by a central authority with powerful computational resources, information is disseminated to searching users quickly and with fewer resources needed. Where some users may find fault or issues with the system is that the ultimate source of truth and clarity of information is always the central authority.[25] The centralized system easily allows the central authority to control information and content being promoted to users, which results in censorship of information. Censorship may also take place on a larger scale – the central authority has the ability to cut off parts of the network, inhibiting it from interacting or collaborating with the remaining parts of the network.[25] This is often seen as a major fault of Web 2.

Because Web 2 relies heavily on one server to operate the network, it is more prone to risk. Malicious attacks on the network, such as viruses and malware, can be extremely harmful if directed at the central authority: if the central authority is breached, the entire network is disrupted.[25] In comparison, a decentralized system has multiple, interconnected servers. If some servers were experiencing a malicious attack, the network would still be operable, as it is not reliant on one single, powerful server to function. However, reliance on a single server is beneficial when it comes to coordination among users. The central authority is able to ‘force’ users to adopt upgrades or perform system updates (i.e., app updates) with little pushback, as it is highly difficult for individual users to mount opposition. On the other hand, coordination between network servers and users is difficult in a decentralized system, as all servers are able to make their own decisions regarding upgrades and updates without agreement from other servers or users. Conflicts among the servers and pushback from the users may result in errors and issues within the network.
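The “low network diameter” of a centralized system can be made concrete with a toy graph experiment. The sketch below (an illustration, not a model of any real network) measures the diameter, i.e. the worst-case number of hops between any two participants, of a centralized star topology versus a decentralized ring of the same size:

```python
from collections import deque

# In a star network every participant reaches every other through the hub
# in at most two hops; a decentralized ring of the same size needs many more.

def diameter(graph):
    """Longest shortest-path (in hops) between any two nodes, via BFS."""
    best = 0
    for start in graph:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nb in graph[node]:
                if nb not in dist:
                    dist[nb] = dist[node] + 1
                    queue.append(nb)
        best = max(best, max(dist.values()))
    return best

n = 10
# Star: node 0 is the central authority, everyone else connects only to it.
star = {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}
# Ring: each node knows only its two neighbours -- no central hub.
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

print(diameter(star))  # 2 -- any user reaches any other via the hub
print(diameter(ring))  # 5 -- half-way around the ring
```

The flip side is visible in the same picture: delete node 0 from the star and the network falls apart entirely, while the ring merely degrades, which is the resilience argument for decentralization made earlier in this section.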

User-Generated Content

Figure 8. GoPro's YouTube Channel [26]

User-Generated Content (UGC) is any published material produced by a user unrelated to the creators of the platform or service, usually in the form of text, images, videos, or reviews.[27] The transition from the read-only web to the read-write web enabled users to develop and publish their own content on the web, indirectly establishing the idea of User-Generated Content. Alongside creating their own content, Web 2 also enables users to reuse, share, or edit other users’ content. The collaborative and interactive nature of Web 2 resulted in the development of a strong online community revolving around social networking, with the creation of web apps, self-publishing sites, and social media sites. Social media sites are often the main platform thought of when discussing User-Generated Content: they allow users to publish their own content as well as engage and interact with one another in the form of likes, comments, and shares. Many different social media platforms have risen and fallen in popularity since the early 2000s; however, the core premise remains the same: a ‘free’ service for users to produce content and interact with other users.

In the development of Web 2, UGC became increasingly popular due to users’ enjoyment of contributing to the web and how effective UGC can be for businesses with an online presence. Previously, businesses were keen to utilize image or video advertisements to appeal to new customers and retain old ones; however, advertisements were deemed an untrustworthy representation of a business and its products and services.[27] Companies are expected to uphold a certain degree of professionalism and care when advertising their business, which can often undercut their efforts to be transparent. With the popularization of UGC, a shift occurred in businesses’ marketing tactics. Instead of developing advertisement plans, businesses began to develop methods of collecting and using UGC on their platforms, along with benefits for users who offer their content to businesses. Many companies continue to use UGC to advertise their products and services, as it provides an unbiased view of the company and is more appealing to customers. For example, a majority of GoPro’s YouTube content consists of UGC videos that users have filmed using GoPro products.

Content Creation Monetization

User-Generated Content and monetization are often discussed in tandem with one another. Social media sites and UGC monetization through business use had given rise to the creator economy. The concept of Content Creation Monetization sums up three different aspects, as the name suggests: content, creation, and monetization. Content refers to any digital media on the web, which can be text, images, or videos. Creation is the development or production of content. Lastly, monetization indicates the ability to make money off of a product or service. Simply put, Content Creation Monetization is a method of leveraging independently-created digital media to earn money.

The creator economy is a community of independent individuals who leverage technology and social media to sell their knowledge and skills as a product, whether as a side hustle or a full-time business. [28] These individuals range from social media influencers to bloggers to music producers. The creator economy developed alongside the concept of UGC and the creation of numerous social media platforms in which platforms or other users offer creators monetary compensation in exchange for content. Currently, almost all social media platforms have some form of support for the creator economy. TikTok utilizes a ‘Creator Fund,’ in which enrolled creators are able to earn money based on a variety of factors surrounding their content.[29] Individuals can apply to the Creator Fund directly in the app and, granted they meet the set of standards, can begin receiving funds from TikTok. On the other hand, live streaming platforms, like Twitch, allow creators to receive monetary compensation directly from other users who support them.

The creator economy has seen tremendous growth in the past couple of years due to the rising popularity of various social media apps, trends in the way social media is used, and the effects of the COVID-19 pandemic. Not only has the community grown, but creators are also leveraging greater opportunities to expand their careers as full-time content creators and finding opportunities beyond content creation on the platform they started on. For example, creators have been able to develop a cross-platform fan base which further aids in the monetization of their content and development of their own independent businesses and brands.[30]

There are four main areas of UGC development within the creator economy: music and podcasts, writing, video and live streams, and photography and design.

Music & Podcasts

Numerous media platforms developed in Web 2 offer individuals the ability to create, record, and share their work with the web. Spotify, iTunes, Pandora, SoundCloud, and YouTube are easy to use with a low barrier to entry, and they can provide great reach without the creator needing to be a renowned artist or a podcaster signed to a label.

Creators of music and podcasts can monetize their content through payouts from the platform(s) they use. There are three types of royalties that platforms use to pay their creators: mechanical royalties, public performance royalties, and payouts to recording owners. [31] On the streaming platforms mentioned above, mechanical royalties are paid each time a user streams a song or episode. Public performance royalties are the fees artists or publishers receive each time their content is played publicly, for example, on a radio station or in a retail store. These are difficult for content creators to control themselves and are negotiated between the streaming platform and Performance Rights Organizations (PROs). [31] Finally, the payout to recording owners is a percentage-based distribution, to all affiliated individuals, of the total amount of money that a song or episode earns.

The earning potential for music and podcasts on any streaming platform is also tied to the number of streams per song or episode, which is directly correlated with the popularity of the creator. Per-stream payout rates vary considerably by platform, with Tidal being the highest-paying. [31]
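The payout mechanics described above reduce to simple arithmetic: gross earnings are roughly the stream count multiplied by the platform's per-stream rate, before any label or distributor splits. The sketch below illustrates this; the rates are rough illustrative figures (actual per-stream payouts fluctuate with region, subscription mix, and platform deals), not official platform numbers.

```python
# Rough, illustrative per-stream payout rates in USD -- NOT official
# figures; real rates vary by region, subscription mix, and deals.
PAYOUT_PER_STREAM = {
    "Tidal": 0.0125,
    "Spotify": 0.0033,
    "SoundCloud": 0.0025,
}

def estimate_gross_earnings(streams: int, platform: str) -> float:
    """Gross payout before any label/distributor splits."""
    return streams * PAYOUT_PER_STREAM[platform]

# The same 100,000 monthly streams pay very differently per platform:
for platform in PAYOUT_PER_STREAM:
    print(f"{platform}: ${estimate_gross_earnings(100_000, platform):,.2f}")
```

Even at identical stream counts, the choice of platform changes gross revenue by several multiples, which is why the platform mix matters as much as popularity.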

Writing

Figure 2. Fiverr [32]

Web 2 resulted in the creation of numerous self-publishing and freelance websites that serve as platforms for bloggers, authors, and writers to share their thoughts and skills. The most popular form of writing on Web 2, still widely used despite the ebb and flow of various social media sites, is the blog. A blog is essentially a regularly updated web page focused on providing information or opinions on various topics. [33] Although blogs can contain images or other media, the main focus is usually the written content of the post. Blog posts used to be simple, static information dumps of the creator's thoughts and opinions, displayed in reverse chronological order on the blog page. Nowadays, however, many blogs and self-publishing websites incorporate interactive features, including comments. Popular blog genres today include food, fashion, lifestyle, and opinion blogs; food and recipe blogs often add rating features for extra engagement from their users. Simply writing blog posts is typically not enough to monetize one's content, so other streams of income can come from related social media platforms, partnerships with companies, or platforms such as Patreon and Fiverr.

Fiverr is a freelance website that allows users to both request and sell services, for personal or business use, [34] which includes writing. The website is simple to use and easily connects individuals or businesses looking to hire with independent creatives that can fulfill their specific job needs. Jobs can range from writing company blog posts to individual resumes to copywriting which allows for a wealth of opportunities for individuals to leverage their skills.

Video & Live Streams

Video and live streaming have gained increasing popularity in the past couple of years, especially since the onset of the COVID-19 pandemic. There are two forms of video and live stream entertainment: non-linear and linear. Non-linear entertainment refers to preprogrammed streaming (i.e., SVOD services) that can be viewed ‘on demand,’ such as Netflix, Disney+, and Hulu. Linear entertainment refers to streaming that must be viewed at a specific time. [35] This wiki focuses on linear entertainment, as it is directly related to UGC and the creator economy.

Figure 3. Disguised Toast [36]

Linear entertainment comes from platforms such as Twitch, YouTube, TikTok Live, and Instagram Live. Each platform has its own method of paying out to creators; however, there is typically some form of direct payout from the platform as well as funds from viewer subscriptions.

Jeremy Wang, otherwise known as ‘Disguised Toast,’ is a content creator well known for streaming on Twitch and, previously, YouTube. In 2018, when Disguised Toast was a newly established streamer on Twitch, he broke down in a YouTube video [37] how much he earns monthly from streaming and what the revenue split looks like. Although he was already fairly well known at the time, the figures put into perspective the earning potential of a full-time streamer.

Income Avenue    Earnings (monthly)
Donations        $2,500
Advertisements   $4,000
Subscriptions    $14,000
Partnerships     $1,000 - $10,000
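Summing the table above, keeping the partnership range as separate low and high ends, gives a rough monthly total. A small sanity check, sketched in Python:

```python
# Monthly income figures from Disguised Toast's 2018 breakdown (table above).
fixed_income = {
    "Donations": 2_500,
    "Advertisements": 4_000,
    "Subscriptions": 14_000,
}
partnerships_low, partnerships_high = 1_000, 10_000  # variable range

base = sum(fixed_income.values())
low, high = base + partnerships_low, base + partnerships_high
print(f"Estimated total: ${low:,} - ${high:,} per month")
# → Estimated total: $21,500 - $30,500 per month
```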

Photography & Design

Web apps and pages such as Instagram, Snapchat, and Pinterest have paved the way for photographers, designers, illustrators, and other artists to share their work. These platforms have also developed over time to better suit individuals who use social media for business. Creators can produce and publish their work on the web, engage with the community of their niche, and build an online portfolio. Nowadays, social media sites double as business platforms, where creators can advertise their work and other users can request and purchase their products or services.

Similar to writing, freelance websites let individuals or businesses easily find artists for commissioned pieces, graphic design, or UI design for their websites. Other sites, such as Patreon, let creators charge users a monthly fee for access to exclusive content, an active community, and insights into the creative process. This allows artists to develop content they enjoy, supported by a genuine fanbase and a steady stream of monthly income. Funds earned from Patreon depend on a number of variables: the size of the community, the portion of that community interested in the membership offering, and the offering's attractiveness and value. Additionally, creators have some agency over how much they earn, since they design their own reward tiers with varying prices and rewards. [38]
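Those variables make membership income straightforward to model: gross revenue is the sum over tiers of price times patron count, less the platform's cut. The tier names, prices, patron counts, and the 8% fee below are all hypothetical, chosen only to illustrate the calculation:

```python
# Hypothetical tiers: name -> (monthly price in USD, number of patrons).
# These figures are illustrative, not an actual creator's tiers.
tiers = {
    "supporter": (3, 120),
    "fan": (8, 40),
    "insider": (20, 10),
}

PLATFORM_FEE = 0.08  # assumed fee; the platform's actual cut varies by plan

gross = sum(price * patrons for price, patrons in tiers.values())
net = gross * (1 - PLATFORM_FEE)
print(f"Gross: ${gross}, net after fee: ${net:.2f}")
```

Note how a small, cheap tier with many patrons can out-earn an expensive tier with few, which is why tier design is a lever creators actively tune.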

[Present Day] The Future of Web 3

Figure 4. Web 1, Web 2, and Web3 [39]

In the 2010s, the novelty of the ‘new’ Web wore off, and individuals began to generate new ideas about what the Web's full potential could be.

The introduction of blockchain technology and cryptocurrencies in the late 2000s and early 2010s spurred the latest vision of the web, known as Web 3. Compared to the current state of the web, the hope for Web 3 is a decentralized, transparent database that prevents moderators from censoring users' content. [24]

Alongside the vision for a collaborative and interactive space, Berners-Lee advocated for the development of the Semantic Web. [22] In contrast to the structure of Web 2, in the Semantic Web data are given meaning, improving the back-end data infrastructure of the web and better supporting data mining and natural language searches. [22] Although Berners-Lee envisioned this during the first iteration of the web, Web 2 did not realize the concept of the Semantic Web. Considering users' desire for a more decentralized and less censored web, Web 3 will incorporate the concept of the Semantic Web to help achieve users' desired online environment.

With the direction the web is veering towards, contribution from and interaction between users will remain a strong focus of Web 3. NFTs, synthetic media and AI tools, the creator economy, and VR and the metaverse are aspects of Web 3 that will position users as active participants in the web. [40] The creator economy was popularized during the Web 2 ‘era’; however, it is believed that it will continue to grow and develop with Web 3. With the decentralized future of the web, users will have more options and avenues for monetizing content.

What is Web 3?

Web3 is the next iteration of the world wide web. It uses blockchain and decentralized networks to create aspects of the internet never seen before, including cryptocurrencies, tokens, decentralized autonomous organizations (DAOs), and non-fungible tokens (NFTs). It allows for ownership of digital assets on the internet, which has the potential to revolutionize businesses and the internet itself. Crypto wallets enable payments through these decentralized networks using cryptocurrencies. Aspects of Web3, including DAOs and cryptocurrencies, are owned by their users and builders rather than by large corporations. Web3 is available for everyone to use around the world, with no restrictions on who can access it. Web3 uses economic incentives and mechanisms such as proof of stake and proof of work to prevent reliance on trusted third parties. [41]


NFTs are digital assets that are traded on the blockchain and on decentralized marketplaces, usually bought using cryptocurrencies. NFT stands for non-fungible token, meaning each one is a unique asset with no two the same. Cryptocurrencies, on the other hand, are fungible: there are millions of identical units of the same currency. NFTs are extremely important for the future of Web3 and extended reality.

NFTs in Business

The blockchain allows for a variety of business applications that were never previously possible. Secondary sales can be tracked on the blockchain, and royalties can be collected on each one. This creates immense opportunities in industries such as ticket sales, music, and intellectual property.

Fly Fish Club

The Fly Fish Club is a restaurant created by entrepreneur Gary Vaynerchuk. The restaurant is exclusive to holders of the NFT released by the company, which receives a 5% royalty on every secondary market sale of the NFT on a decentralized marketplace. [42]

Web 3 and Extended Reality

Web3 and extended reality are heavily connected. Extended reality is being built on top of Web3 technologies at a rapid pace, and the two are by nature intertwined.


Decentraland is a virtual world that users can enter using VR technology or just a simple computer. It uses blockchain technologies to allow for ownership of items within Decentraland, such as land and other virtual items. These items can be sold on any decentralized marketplace, and items from outside the game on the blockchain can also be brought into Decentraland. One piece of land in Decentraland sold for over 2 million dollars. [43]

Business in Virtual Worlds

Many large corporations are investing advertising dollars in these virtual worlds that use blockchain technology. Samsung launched the Samsung S20 in a virtual Samsung store in Decentraland. Adidas bought a piece of land in The Sandbox (another Web3 virtual world) and opened a virtual store. Miller Lite bought a piece of virtual land in Decentraland and opened a bar during the Super Bowl.

The Future of Extended Reality

In the near future, developments in the realm of Extended Reality will likely continue to accelerate and become more advanced. Extended Reality applications are most often discussed for personal use; however, using Extended Reality in business and education has grown in popularity. As with every new technology or idea presented to the public, though, implications and concerns are also present.


Customer Experience

Figure 5. Walmart Using Oculus VR [44]

Extended Reality applications have been growing in popularity in retail, real estate, and customer service as a way to improve customer experience. Using Augmented Reality, customers can try out clothing, furniture, accessories, and other products in the comfort of their own homes. [45] Virtual Reality stagings of properties have been a recent trend, especially with the COVID-19 pandemic and foreign buyers. Prospective customers can tour properties in a virtual environment without having to be present at the property, or even in the same country. Additionally, features that let furniture and other objects be moved around give customers a clearer vision of what their future property could look like.

Training & Development

Numerous companies have adopted Extended Reality applications for the training and development of their employees. [45] Using Virtual Reality technology, employees can engage in high-risk or dangerous simulations in a virtual environment, learning the content without being put in any harm. Virtual Reality offers experiential learning that traditional training does not.

Walmart began using Oculus VR headsets as part of their employee training. [46] The headsets are specifically used to train employees on new technology, soft skills, and compliance. With the new training, Walmart has seen an increase in employee confidence, retention, and training test scores.

Gaming & Entertainment

Figure 6. Travis Scott Fortnite Concert [47]

The gaming industry is considered one of the earliest adopters of Extended Reality applications. [45] In 2016, Sony launched PlayStation VR. [48] Although other organizations have since released numerous improved VR technologies, this gaming release displayed the capability and future of VR applications. XR is used in gaming as a way for users to escape reality and is especially popular for games that immerse them in environments they would otherwise not be able to visit.

Virtual Reality events have also grown in popularity in recent years, especially since the start of the COVID-19 pandemic. In April 2020, Travis Scott held a fully virtual concert in Fortnite, [49] in which users were able to ‘see’ Travis Scott perform live using VR applications. The event garnered over 25 million unique players/attendees.

Remote Work

The COVID-19 pandemic has proven that there are numerous methods for colleagues to work together remotely in an efficient and effective manner. Using Extended Reality technologies, employees can easily connect with others in their office or team and collaborate on shared work. It is believed that VR connections are better than simple online connections (e.g., Zoom and Slack), as VR allows individuals to register the experience as if they were actually present. [40] VR headsets can also be used for corporate meetings and cultural events.

Operational Efficiency

The future of Extended Reality includes incorporating the technology into everyday company operations. Jaguar Land Rover uses VR technology to design and engineer new vehicles in a cost-efficient and effective way. [50] The company's engineers generate 3D models of each car part, which allows them to visualize every aspect of the vehicle and compile the parts together in a virtual environment. VR technology helps engineers simulate how the car's components and systems perform before building real-life prototypes. This has reduced the costs and time otherwise necessary for building multiple physical prototypes.


Data Privacy

As Extended Reality applications typically track users' movements, behaviours, and emotions, data privacy is a large concern. [45] Extended Reality technology utilizes biometric data to ensure a highly immersive and engaging experience for users; however, this also increases privacy risks. To overcome this barrier, organizations must determine how best to protect and store users' data to ensure confidentiality and privacy. The nature of Extended Reality technology makes it difficult to mitigate data risks with the privacy policies that exist for other technology.


Cost

A fully immersive organization or area of work is highly unlikely without a large investment. For example, a Meta Quest 2 costs upwards of $600 per headset, which would be a large investment for a company with a large staff. [51] The Mercury Extended Reality System would cost organizations approximately $1,500 to $10,000 or more per month, depending on the integration levels and features of the desired system. [52] Although the long-term benefits may outweigh the short-term costs, many companies do not have the resources to commit to an XR workplace and infrastructure.

Cultural Implications

Although the idea of Extended Reality has been around for a long time, many individuals remain skeptical of Extended Reality infrastructure and of engaging in a virtual environment. Some individuals may be less willing than others to integrate Extended Reality applications into their lives, which can prove to be a problem if society migrates towards an environment heavily reliant on Extended Reality. This can also result in large issues in the workplace.


Conclusion

The web has undoubtedly become a significant aspect of our daily lives and will continue to adapt to society's wants and needs. In addition to the evolving web, the Extended Reality industry will see rapid advances in the near future and greater incorporation into everyday life. VR technology will become lighter, faster, more affordable, and more usable for everyone. Advances in smartphone technology, combined with advancements in XR technology, will result in improved AR and VR experiences on smartphones. The evolving web, Extended Reality tools, and shifting societal norms show how uncertain and unpredictable the future of technology is.


Jessica Pham Jordan Harjono Luca Salzano
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada


  1. https://www.forbes.com/sites/bernardmarr/2019/08/12/what-is-extended-reality-technology-a-simple-explanation-for-anyone/?sh=4e71ed777249
  2. https://www.interaction-design.org/literature/article/beyond-ar-vs-vr-what-is-the-difference-between-ar-vs-mr-vs-vr-vs-xr
  3. https://www.resco.net/blog/what-is-augmented-reality-and-how-does-ar-work/
  4. https://www.analyticsinsight.net/an-overview-understanding-different-types-of-augmented-reality/
  5. https://cointelegraph.com/metaverse-for-beginners/augmented-reality-vs-virtual-reality-key-differences
  6. https://www.resco.net/blog/what-is-augmented-reality-and-how-does-ar-work/
  7. https://www.xrtoday.com/virtual-reality/how-does-virtual-reality-work/
  8. https://www.softwaretestinghelp.com/what-is-virtual-reality/#Application_Of_Virtual_Reality
  9. https://rextheme.com/types-of-virtual-reality/
  10. https://www.marxentlabs.com/what-is-virtual-reality/
  11. https://www.softwaretestinghelp.com/what-is-virtual-reality/#Application_Of_Virtual_Reality
  12. https://docs.microsoft.com/en-us/windows/mixed-reality/discover/get-started-with-mr
  13. https://www.vrs.org.uk/virtual-reality/history.html
  14. https://virtualspeech.com/blog/history-of-vr
  15. https://www.g2.com/articles/history-of-virtual-reality
  16. https://www.g2.com/articles/history-of-virtual-reality
  17. https://www.g2.com/articles/history-of-virtual-reality
  18. https://www.g2.com/articles/history-of-virtual-reality
  19. https://arsenal.cgtrader.com/blog/arkit-vs-arcore
  20. https://webfoundation.org/about/vision/history-of-the-web/
  21. https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/HTML_basics
  22. https://www.britannica.com/topic/Web-20
  23. https://www.geeksforgeeks.org/web-1-0-web-2-0-and-web-3-0-with-their-difference/
  24. https://hbr.org/2022/05/what-is-web3
  25. https://ethereum.org/en/developers/docs/web2-vs-web3/
  26. https://instapage.com/blog/what-is-user-generated-content
  27. https://instapage.com/blog/what-is-user-generated-content
  28. https://vimeo.com/blog/post/creator-economy/
  29. https://newsroom.tiktok.com/en-gb/tiktok-creator-fund-your-questions-answered
  30. https://signalfire.com/blog/creator-economy/
  31. https://www.headphonesty.com/2021/11/how-much-does-spotify-pay-per-stream/
  32. https://www.fiverr.com/
  33. https://blog.hubspot.com/marketing/what-is-a-blog
  34. https://www.fiverr.com/
  35. https://www.linkedin.com/pulse/linear-vs-non-linear-television-guillaume-morel/
  36. https://www.youtube.com/watch?v=6m5P_n5njCQ&ab_channel=DisguisedToast
  37. https://www.youtube.com/watch?v=6m5P_n5njCQ&ab_channel=DisguisedToast
  38. https://blog.patreon.com/figuring-out-how-much-you-might-make-on-patreon
  39. https://zipmex.com/learn/web-3-explained/
  40. https://www.forbes.com/sites/forbesbusinesscouncil/2022/03/31/how-web3-will-change-content-monetization-tools/?sh=513e48ce6074
  41. https://ethereum.org/en/web3/
  42. https://www.flyfishclub.com/
  43. https://nationalpost.com/news/canada/canadian-crypto-investor-snags-virtual-real-estate-plot-for-record-2-4-million
  44. https://corporate.walmart.com/newsroom/innovation/20180920/how-vr-is-transforming-the-way-we-train-associates
  45. https://www.deltecbank.com/2021/05/12/what-is-extended-reality-and-how-does-it-impact-different-industries/?locale=en
  46. https://corporate.walmart.com/newsroom/innovation/20180920/how-vr-is-transforming-the-way-we-train-associates
  47. https://www.youtube.com/watch?v=wYeFAlVC8qU
  48. https://www.playstation.com/en-ca/ps-vr/
  49. https://www.nbcnews.com/think/opinion/fortnite-s-travis-scott-concert-was-historic-he-s-not-ncna1195686
  50. https://techwireasia.com/2020/02/how-jaguar-land-rover-uses-vr-to-design-and-engineer-new-vehicles/
  51. https://www.oculus.com/experiences/quest/
  52. https://roundtablelearning.com/what-is-the-cost-of-mercury-extended-reality-system-pricing-and-3-cost-factors/