The Dead Internet Theory: Separating Fact from Fiction

In recent years, a concept known as the “Dead Internet Theory” has emerged, suggesting that the internet as we know it is slowly dying or becoming obsolete. This theory has sparked debates and raised questions about the future of the digital age. In this article, we will explore the Dead Internet Theory, its origins, its arguments, and the reality behind this provocative concept.

Understanding the Dead Internet Theory:

The Dead Internet Theory posits that the internet is losing its original spirit of openness, freedom, and decentralization. Proponents argue that increasing corporate control, government surveillance, censorship, and the dominance of a few powerful entities are stifling innovation and transforming the internet into a highly regulated and centralized platform. They claim that the internet is losing its vitality and becoming a mere reflection of offline power structures.

Origins and Key Arguments:

The origins of the Dead Internet Theory can be traced back to concerns raised by early internet pioneers and activists who championed a free and open internet. They feared that the commercialization and consolidation of online platforms would undermine the principles that made the internet a transformative force.

Among the concerns raised by the Dead Internet Theory are fake traffic from bots and fake user accounts. According to numerous sources, a significant percentage of Internet traffic comes from bots, with estimates ranging from roughly 42% to over 66%. Bots are automated software programs that perform various tasks, ranging from simple to complex, on the internet. They can be beneficial, such as search engine crawlers or chatbots, or malicious, like spam bots or distributed denial-of-service (DDoS) attack bots.
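To make the distinction concrete, here is a minimal sketch of signature-based bot detection. The signature list and function names are illustrative assumptions; real bot-detection systems also use behavioral signals such as request timing, mouse movement, and IP reputation.

```python
# Minimal user-agent heuristic for flagging likely bot traffic.
# Signature-based checks like this catch declared crawlers; they
# cannot catch bots that disguise themselves as normal browsers.

KNOWN_BOT_SIGNATURES = (
    "googlebot", "bingbot", "crawler", "spider",
    "curl", "python-requests", "scrapy",
)

def is_likely_bot(user_agent: str) -> bool:
    """Return True if the user-agent string matches a known bot signature."""
    ua = user_agent.lower()
    return any(sig in ua for sig in KNOWN_BOT_SIGNATURES)

def bot_share(user_agents):
    """Fraction of requests flagged as likely bot traffic."""
    if not user_agents:
        return 0.0
    return sum(is_likely_bot(ua) for ua in user_agents) / len(user_agents)
```

Running `bot_share` over a server’s access log would give a rough lower bound on bot traffic, since disguised bots slip through.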

The problems associated with bot traffic arise primarily from malicious bots that engage in fraudulent activities, spamming, data scraping, click fraud, credential stuffing, and more. These activities can have severe consequences, including financial losses, compromised security, reputational damage, and disruptions to legitimate online services.

Social media platforms have been grappling with the challenge of fake user accounts for some time. These accounts are created for various purposes, including spreading misinformation, engaging in spam or fraudulent activities, manipulating public opinion, or conducting malicious campaigns. Increasingly, bots use these fake accounts to “push” a narrative. This activity is especially noticeable during political elections.

Proponents of The Dead Internet Theory highlight several key arguments:

  • Centralization and Monopolistic Power: They argue that a small number of tech giants now dominate the internet, controlling vast amounts of data and shaping user experiences. This concentration of power limits competition and stifles smaller players’ ability to innovate.
  • Surveillance and Privacy Concerns: With the rise of surveillance technologies and data breaches, privacy advocates express worry that individuals’ online activities are constantly monitored and exploited for various purposes, eroding trust in the internet.
  • Censorship and Content Control: The theory also highlights instances of government-imposed censorship, content moderation challenges, and algorithmic biases, suggesting that freedom of expression is under threat.
  • Net Neutrality and Access: Advocates argue that the internet’s openness is compromised by practices that prioritize certain types of traffic or restrict access based on geographic location or socioeconomic factors, leading to a digital divide.

The Reality:

While the concerns raised by the Dead Internet Theory hold some validity, it is important to approach the subject with nuance. The internet remains a dynamic and evolving medium, shaped by technological advancements and societal changes. While challenges exist, numerous initiatives and movements aim to preserve the internet’s founding principles.

Efforts such as decentralized technologies (like blockchain), open-source software, encryption tools, and net neutrality advocacy strive to counteract centralization, surveillance, and censorship. Additionally, the proliferation of alternative platforms, social networks, and online communities ensures that diverse voices and opinions can still find a space online.

Final Thoughts:

The Dead Internet Theory serves as a reminder of the ongoing struggle to maintain an open, free, and decentralized internet. While concerns over centralization, surveillance, and censorship are valid, the internet is not irreversibly “dead.” It continues to evolve, driven by the collective actions of individuals, organizations, and policymakers. It is important that the internet remains a powerful tool for connectivity, knowledge-sharing, and empowerment.


WHAT IS WEB 3.0?

For too long, a few large corporations (aka Big Tech) have dominated the Internet. In the process, they have taken control away from users. Web 3.0 could put an end to that, with many additional benefits.

What is Web 3.0?
The Big Tech companies take users’ personal data, track their activity across websites, and use that information for profit. If these large corporations disagree with what a user posts, they can censor the user, even if the post contains accurate information. In many cases, users are permanently blocked (deplatformed) from a website or platform. These large corporations use creative marketing strategies, aggressive business deals, and lobbyists to consolidate their power and promote proprietary technology at the expense of Open Source solutions and data privacy.

If you care about regaining ownership of your personal data, you should be excited about Web 3.0. If you want an Internet that provides equal benefit to all users, you should be excited about Web 3.0. If you want the benefits of decentralization, you should be excited about Web 3.0.

To understand Web 3.0, it is beneficial to have a brief history of the Internet.

Web 1.0 (1989 – 2005)

Web 1.0 was the first stage of the internet, frequently called the Static Web. A small number of creators produced content, while most internet users simply consumed it. It was built on decentralized and community-governed protocols. Web 1.0 consisted mostly of static pages with limited functionality, commonly hosted by Internet Service Providers (ISPs).

Web 2.0 (2005 – 2022)

Web 2.0 was also called the Dynamic Internet and the Participative Social Web. It was primarily driven by three core areas of innovation: Mobile, Social, and Cloud Computing. The introduction of the iPhone in 2007 ushered in the mobile Internet and drastically broadened both the user base and the usage of the Internet.

The Social Internet was introduced by websites like Friendster (2003), MySpace (2003), and Facebook (2004). These social networks coaxed users into generating content, including sharing photos, recommendations, and referrals.

Cloud Computing commoditized the production and maintenance of internet servers, web pages, and web applications. It introduced concepts like Virtual Servers and Software as a Service (SaaS). Amazon Web Services (AWS) launched in 2006 with the release of the Simple Storage Service (S3). Cloud computing is Web-based computing that allows businesses and individuals to consume computing resources, such as virtual machines, databases, processing, memory, services, storage, messaging, and events, on a pay-as-you-go basis. Cloud Computing continues to grow rapidly, driven by advanced technologies such as Artificial Intelligence (AI) and Machine Learning (ML), and by the continued adoption of cloud-based solutions by enterprises.

Unfortunately Web 2.0 also brought about centralization, surveillance, tracking, invasive ads, censorship, deplatforming and the dominance of Big Tech.

Web 3.0 (In Progress)

Web 3.0 is still an evolving and broad-ranging concept, but rapid progress is being made. It was originally thought that the next generation of the Internet would be the Semantic Web. Tim Berners-Lee (known as the inventor of the World Wide Web) coined the term to describe a more intelligent Internet in which machines would process content in a humanlike way, understanding information on websites both contextually and conceptually. Some progress was made on helping machines understand concepts and context via metadata markup standards like Microformats. Web 3.0 was also to be a return to the original concept of the web: a place where one does not need permission from a central authority to post, there is no central control, and there is no single point of failure.

Those are idealistic goals, but recent technology developments like Blockchain have expanded the idea of what Web 3.0 could represent. The framework for Web 3.0 was expanded to include decentralization, self-governance, artificial intelligence, and token based economics. Web 3.0 includes a leap forward to open, trustless and permissionless networks.

The rise of technologies such as distributed ledgers and storage on blockchain will allow data to be decentralized and will create a secure and transparent environment. This will hopefully put an end to most of Web 2.0’s centralization, surveillance, and exploitative advertising.

The adoption of Cryptocurrency and Digital Assets is also a major part of Web 3.0. Web 2.0 introduced Big Tech’s monetization strategy of selling user data. Web 3.0 introduces monetization and micropayments using cryptocurrencies, which can be used to reward the developers and users of Decentralized Applications (DApps) and helps ensure the stability and security of a decentralized network. Web 3.0 lets users give informed consent before their data is sold, and returns the profits to the user via digital assets and digital currency. Cryptocurrencies also use Decentralized Finance (DeFi) solutions to implement cross-border payments more effectively than traditional payment channels. The Metaverse and real-time 3D worlds are also part of what is envisioned for Web 3.0.

If you understand the benefits of decentralization, privacy, and open solutions it’s time to recognize the importance of Web 3.0!


How Leading Technology Companies Are Using Artificial Intelligence And Machine Learning

Artificial Intelligence looks for patterns, learns from experience, and predicts responses based on historical data. Artificial Intelligence is able to learn new things at incredible speeds. Artificial Intelligence can be used to accurately predict your behavior and preempt your requests.

Artificial Intelligence and Machine Learning are shaping many of the products and services you interact with every day. In future blog posts I will be discussing how Artificial Intelligence, Machine Learning, Neural Networks, and Predictive Analytics are being used by Marketers to achieve competitive advantage.

AI’s (Artificial Intelligence) ability to simulate human thinking means it can streamline our lives. It can preempt our needs and requests, making products and services more user friendly as machines learn our needs and figure out how to serve us better.

Here is how some of the top companies are using Artificial Intelligence.

Google

Google is investing heavily in Artificial Intelligence and Machine Learning. Google acquired the AI company DeepMind for its work on energy consumption, digital health, and general-purpose Artificial Intelligence, and is integrating AI into many of its products and services. They are primarily using TensorFlow, an open source software library for high-performance numerical computation. They are using Artificial Intelligence and pattern recognition to improve their core search services. Google is also using AI and machine learning for facial recognition and for natural language processing to power real-time language translation. Google Assistant uses Artificial Intelligence, as does the Google Home series of smart home products, like the Nest thermostat. Google uses a TensorFlow model in Gmail to understand the context of an email and predict likely replies, a feature they call “Smart Reply.” After acquiring more than 50 AI startups in 2015–16, this seems like only the beginning for Google’s AI agenda. You can learn more about Google’s AI projects here: ai.google/.

Amazon

Amazon has been investing heavily in Artificial Intelligence for over 20 years. Amazon’s approach to AI is called the “flywheel”: innovation around machine learning in one area of the company fuels the efforts of other teams, keeping AI innovation humming along and spreading energy and knowledge across the company. Artificial Intelligence and Machine Learning (ML) algorithms drive many of their internal systems. Artificial Intelligence is also core to their customer experience, from Amazon.com’s recommendation engine that analyzes and predicts your shopping patterns, to the Echo powered by Alexa, to path optimization in their fulfillment centers. Amazon’s mission is to share their Artificial Intelligence and Machine Learning capabilities as fully managed services, and put them into the hands of every developer and data scientist on Amazon Web Services (AWS). Learn more about Amazon Artificial Intelligence and Machine Learning.

Facebook

Facebook has come under fire for their widespread use of Artificial Intelligence analytics to target users for marketing and messaging purposes, but they remain committed to advancing the field of machine intelligence and are creating new technologies to give people better ways to communicate. They have also come under fire for not doing enough to moderate content on their platform. Billions of text posts, photos, and videos are uploaded to Facebook every day. It is impossible for human moderators to comprehensively sift through that much content. Facebook uses artificial intelligence to suggest photo tags, populate your newsfeed, and detect bots and fake users. A new system, codenamed “Rosetta,” helps teams at Facebook and Instagram identify text within images to better understand what their subject is and more easily classify them for search or to flag abusive content. Facebook’s Rosetta system scans over a billion images and video frames daily across multiple languages in real time. Learn more about Facebook AI Research. Facebook also has several Open Source Tools For Advancing The World’s AI.

Microsoft

Microsoft added Research and AI as their fourth silo alongside Office, Windows, and Cloud, with the stated goal of making broad-spectrum AI applications more accessible and everyday machines more intelligent. Microsoft is integrating Artificial Intelligence into a broad range of Microsoft products and services. Cortana is powered by machine learning, allowing the virtual assistant to build insight and expertise over time. AI in Office 365 helps users expand their creativity, connect to relevant information, and surface new insights. Microsoft Dynamics 365 offers business applications that use Artificial Intelligence and Machine Learning to analyze data, improve business processes, and deliver predictive analytics. Bing is using advances in Artificial Intelligence to make it even easier to find what you’re looking for. Microsoft’s Azure Cloud Computing Services has a wide portfolio of AI productivity tools and services. Microsoft’s Machine Learning Studio is a powerfully simple browser-based, visual drag-and-drop authoring environment where no coding is necessary.

Apple

Apple is the most tight-lipped among top technology companies about their AI research. Siri was one of the first widely used examples of Artificial Intelligence for consumers. Apple had a head start, but appears to have fallen behind their competitors. Apple’s Artificial Intelligence strategy continues to focus on running workloads locally on devices, rather than on cloud-based resources as Google, Amazon, and Microsoft do. This is consistent with Apple’s stance on respecting user privacy, and Apple believes the approach has some advantages. They have a framework called Create ML that app makers can use to train AI models on Macs. Core ML is the machine learning framework used across Apple products, including Siri, Camera, and QuickType, and it allows developers to easily incorporate AI models into apps for iPhones and other Apple devices. It remains to be seen whether Apple can get developers using the Create ML technology, but given the number of Apple devices consumers have, I expect they will get some traction with it.

These are just a few examples of how leading technology companies are using artificial intelligence to improve the products and services we use everyday.


Targeted Marketing Models

Targeting the best prospective core customers within the markets that have the highest probability of success establishes clear marketing goals that can be accurately measured by customer acquisition and sales conversion.

Targeted Marketing Models
In a previous post I covered Marketing Uplift & Predictive Modeling.

This post will discuss using Predictive Modeling & Marketing Uplift combined with classic Customer Analytics to create more accurate Targeted Marketing Campaigns.

The foundation of any accurate marketing model is a precise definition of your core customer, which encompasses 4 key aspects:

  1. Who your best customers are
  2. Where your best potential customers can be found
  3. What messages those customers respond to & when to send them
  4. The value potential the customers have to you, either in terms of dollars or visits
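One way to operationalize a core customer definition like the one above is a simple weighted score. This is a hypothetical sketch; the field names, weights, and threshold are illustrative assumptions, not part of any specific product or methodology.

```python
# Hypothetical scoring of customers against a "core customer" profile.
# Each input dimension is assumed to be pre-normalized to 0..1
# (e.g. value potential, engagement with messages, demographic fit).

DEFAULT_WEIGHTS = {"value": 0.5, "engagement": 0.3, "profile_fit": 0.2}

def core_customer_score(customer, weights=None):
    """Weighted combination of normalized customer dimensions."""
    w = weights or DEFAULT_WEIGHTS
    return sum(w[k] * customer[k] for k in w)

def top_prospects(customers, threshold=0.6):
    """Customers worth targeting first, ordered from highest score down."""
    scored = [c for c in customers if core_customer_score(c) >= threshold]
    return sorted(scored, key=core_customer_score, reverse=True)
```

In practice the weights themselves would be fitted from campaign outcomes rather than chosen by hand, which is where the predictive modeling discussed below comes in.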

Customer analytic solutions have traditionally been used to gather some of those customer insights (data dimensions).

However, a properly defined Marketing Model combined with Customer Analytics can help not only accurately quantify the benefit for your marketing programs, but build out a deeper definition of your core customer.

Predictive Modeling can be used to understand the potential benefit of a new marketing program before it is launched. It can help you predict if it is a good investment or a bad one.

It can also be used to optimize existing marketing programs by establishing and analyzing key metrics.

The ability to measure, analyze, and assess existing marketing programs can identify which programs are not meeting their potential and then determine the gap between actual performance and ultimate potential.

Correlating this marketing information with known customer data dimensions allows you to refine your Core Customer Profile.

This insight allows you to immediately reprioritize resources, reassigning budget and marketing effort from initiatives with no upside to the ones that are merely underperforming.

Customer analytics and marketing models yield insights into who core customers are, where potential core customers are located within an underperforming market area, what messages they respond to and when, and the potential value of each of those customers. Together, these insights make it possible to develop a focused demographic and psychographic profile of customers to target, and a measurable marketing campaign to go after them.

BriteWire sends the right message, to the right customer, at the right time.


Marketing Uplift & Predictive Modeling

Marketing Uplift (aka Marketing Lift) and Predictive Modeling are hot concepts in marketing. In this post we take a quick look at these marketing techniques.

Marketing Uplift (aka Marketing Lift) is the difference in response rate between a treated group and a randomized control group.

Marketing Uplift Modeling can be defined as improving (upping) lift through predictive modeling.

A Predictive Model predicts the incremental response to the marketing action. It is a data mining technique that can be applied to engagement and conversion rates.

Uplift Modeling uses both the treated and control customers to build a predictive model that focuses on the incremental response.

Traditional Response Modeling uses only the treated group to build a predictive model. That model separates the likely responders from the non-responders.
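The core uplift quantity is simple to state: the treated group’s response rate minus the control group’s. The sketch below computes it from observed counts; in a full uplift model, each rate would instead come from a fitted predictive model per customer.

```python
# Uplift = response rate of the treated group minus that of the
# randomized control group. Positive uplift means the marketing
# action produced incremental responses.

def response_rate(responders, total):
    """Observed response rate, guarding against an empty group."""
    return responders / total if total else 0.0

def uplift(treated_responders, treated_total,
           control_responders, control_total):
    """Incremental response attributable to the marketing action."""
    return (response_rate(treated_responders, treated_total)
            - response_rate(control_responders, control_total))

# Example: an 8% treated response rate against a 5% control rate
# yields 3 percentage points of uplift.
```

A negative uplift on a segment is the signature of the "Do Not Disturbs": the campaign itself is driving responses down.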

Uplift Modeling segments an audience into the following primary groups:

  • The Persuadables: audience members who only respond to the marketing action because they were targeted
  • The Sure Things: audience members who would have responded whether they were targeted or not
  • The Lost Causes: audience members who will not respond irrespective of whether or not they are targeted
  • The Do Not Disturbs or Sleeping Dogs: audience members who are less likely to respond because they were targeted

The only segment that provides true incremental responses is the Persuadables.

Because uplift modeling focuses only on incremental responses, it provides a very strong return-on-investment case when applied to traditional demand generation and retention activities. For example, by targeting only the persuadable customers in an outbound marketing campaign, contact costs can be cut and the return per unit of spend dramatically improved.

One of the most effective uses of uplift modeling is the removal of negative effects from retention campaigns. In both the telecommunications and financial services industries, retention campaigns can trigger customers to cancel a contract or policy. Uplift modeling allows these customers, the Do Not Disturbs, to be removed from the campaign.


Big Data & Psychometric Marketing

Psychometrics, sometimes also called psychographics, focuses on measuring psychological traits, such as personality. Psychologists developed a model that sought to assess human beings based on five personality traits, known as the “Big Five.” Big data correlated with personality profiles allows for accurate psychographic targeting by marketers.

Psychometric Marketing

Psychometrics is a field of study concerned with the theory and technique of psychological measurement. Psychometric research involves two major tasks: the construction of measurement instruments, and the development of procedures for measurement. Practitioners are described as psychometricians. The most common model for expressing an individual’s psychometric personality is the Big Five personality traits.

5 Personality Traits (Big Five)

  • Openness (how open are you to new experiences?)
  • Conscientiousness (how much of a perfectionist are you?)
  • Extroversion (how sociable are you?)
  • Agreeableness (how considerate and cooperative are you?)
  • Neuroticism (how easily upset are you?)

Based on these 5 dimensions, also known as OCEAN (Openness, Conscientiousness, Extroversion, Agreeableness, Neuroticism), we can make a relatively accurate assessment of a person, including their needs and fears, and how they are likely to behave.

The “Big Five” has become the standard technique of psychometrics. The problem with this approach has been data collection, because it required filling out a lengthy, highly personal questionnaire.

With the Internet and cell phones, this is no longer a problem.

The Internet allows researchers to collect all sorts of online data from users, and through data correlation they can associate online actions with personality types. Remarkably reliable deductions can be drawn from simple online actions: what users “liked,” shared, or posted on Facebook, or what gender, age, and place of residence they specified, can be directly correlated with personality traits.

While each individual piece of such information is too weak to produce a reliable prediction, when tens, hundreds, or thousands of individual data points are combined the resulting predictions become very accurate. This enables data scientists and marketers to connect the dots and make predictions on people’s demographic and psychographic behavior with astonishing accuracy.
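The combining of weak signals can be sketched with a logistic model. Everything here is illustrative: the signal names and weights are invented for the example, whereas real studies fit such weights from labeled questionnaire data.

```python
import math

# Illustrative sketch: combining several weak binary online signals
# into a single trait estimate with a logistic model. Each weight
# nudges the log-odds; the sigmoid maps the sum into (0, 1).
# Signal names and weights are hypothetical, not fitted values.

HYPOTHETICAL_WEIGHTS = {
    "likes_parties_page": 0.8,
    "large_contact_list": 1.1,   # contact count is cited as a good extroversion indicator
    "posts_selfies_often": 0.6,
}
BIAS = -1.2  # baseline log-odds with no signals present

def extroversion_estimate(signals):
    """Probability-like extroversion score from binary (0/1) signals."""
    z = BIAS + sum(HYPOTHETICAL_WEIGHTS[k] * v for k, v in signals.items())
    return 1 / (1 + math.exp(-z))
```

No single weight is decisive on its own, but as more signals accumulate the estimate moves decisively in one direction, which is exactly the "many weak data points" effect described above.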

Smartphones amount to a psychological questionnaire on their users that is constantly being updated, both consciously and unconsciously. It is not uncommon for data analytics companies to have over 4,000 data points for each person in their target data set.

The strength of this kind of psychometric modeling is illustrated by how well it can predict a subject’s questionnaire answers.

It is now possible to assign Big Five values based purely on how many profile pictures a person has on Facebook, or how many contacts they have (a good indicator of extroversion).

Psychological profiles can be created from this data, but the data can also be used the other way around: to search for specific profiles.

This allows for accurate psychographic targeting by marketers in ways that have never before been possible.

BriteWire Intelligent Internet Marketing facilitates building Psychometric User Profiles with purposefully designed content interactions, and automates marketing responses based on those triggers.


PropTech Disrupting The Real Estate Industry

Real Estate is the largest and most valuable asset class in the world but the Real Estate industry is ripe for disruption. Traditional real estate brokerages are under increasing competitive pressure as technology companies make inroads into the previously walled off real estate market. This complex, multi-faceted and high-stakes industry is rapidly becoming one of the hottest markets for entrepreneurs and investors alike.


PropTech (short for Property Technology) refers to the sector of startups and new technologies cropping up in response to decades of inefficiencies and antiquated processes in the real estate industry. The term encapsulates the entire market space where technology and real estate come together to disrupt the traditional real estate model. PropTech is also referred to as CREtech (Commercial Real Estate Technology) or REtech (Real Estate Technology).

The PropTech sector is generating increased buzz as more people realize the opportunities for innovation in this sector, and venture capital investments expand. Real estate tech startups around the globe raised $1.7 billion worth of investments in 2015 – that’s an 821% increase in funding compared to 2011’s total. PropTech companies have raised around $6.4 billion in funding from 2012 to 2016. It is difficult to pin down exact numbers, but it is estimated that investment in the space increased by 40% for 2016. Compass, Homelink, SMS Assist and OpenDoor Labs all saw their valuations increase to over $1 billion in 2016.

Internet technology has tremendous potential to improve transparency and efficiency in real estate transactions. Property listings and property search were the initial areas for improvement. Over the past several years, technology advancements in property search and listing engines have been introduced.

These advancements allow home buyers to more easily find their home by making search criteria easier to specify. Buyers can specify the obvious search criteria like price range, size, number of bedrooms, and location. Additional facets of navigation like lot size, water features, adjacent public land, conservation easements, and if horses are allowed can also be used as search criteria. Buyers can view maps of where the homes for sale are located, and find homes for sale using maps as their primary search navigation tool. Search criteria can be saved in property search engines so that when new homes come on the market that match a buyer’s predefined criteria they are automatically notified.
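The saved-search alerting described above can be sketched as a simple matching function over listing records. The listing fields and criteria keys below are illustrative assumptions, not the schema of any particular search engine.

```python
# Sketch of matching new listings against a buyer's saved search
# criteria, the mechanism behind automatic new-listing notifications.

def matches(listing, criteria):
    """True if a listing satisfies every saved criterion."""
    if not (criteria["min_price"] <= listing["price"] <= criteria["max_price"]):
        return False
    if listing["bedrooms"] < criteria["min_bedrooms"]:
        return False
    # Optional facet: only enforced when the buyer asked for it.
    if criteria.get("horses_allowed") and not listing.get("horses_allowed", False):
        return False
    return True

def new_matches(listings, criteria):
    """Listings that should trigger a notification to the saved-search owner."""
    return [lst for lst in listings if matches(lst, criteria)]
```

A production system would index these facets (lot size, water features, adjacent public land, conservation easements, and so on) rather than scanning every listing, but the matching logic is the same idea.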

Advancements on property listings have allowed a listing to easily be syndicated to hundreds of websites with the click of a mouse. This dramatically expands the listing’s exposure to potential buyers.

Technological advancements in deep learning, AI, and other big data technologies are driving significant innovation in all areas of the property technology sector. Advancements in video, 3D modeling, and virtual reality are allowing buyers to virtually tour homes they are interested in, and letting agents more effectively showcase the properties they are selling. However, moving key data to the Internet may also diminish the role that real estate agents play in the real estate process.

Competition in the space is increasing, but many tech companies do not have access to the industry and associated data due to real estate laws and regulations on disclosure. In many states it is illegal to disclose the price of a real estate transaction, and only members of the local real estate MLS have access to all the listing information.

What is next? Investor confidence is high, and reports indicate that a massive amount of investment capital will be pumped into the real estate sector. It is safe to assume that we will see significant changes in the technical landscape of the real estate market.