Using Data Analysis To Design Better Photos

Better cameras don’t necessarily take photos that people think are better.

The visual criteria that customers respond to favorably are an ongoing area of interest to me. I routinely use knowledge gained through A/B testing of images to gain a competitive advantage, producing images designed to appeal to specific audiences.

In a previous article I discussed how Netflix uses image analysis to create visual content; that analysis reveals trends in the color preferences and styles of the images Netflix uses.

Cameras get better each year, with sensors capable of capturing more pixels and more dynamic range. The result is images that are sharper and hold more detail in shadows and highlights than ever before.

Advances in computational photography allow cell phone cameras to automatically take “better photos” without extra effort or knowledge of photography. Throwing processing power at raw images lets smartphones and cameras do some amazing things. Computational photography tweaks settings between each shot, ensuring that people and scenes always “look their best”.

At least that is the claim.

The problem is that better cameras don’t necessarily take photos that the majority of people prefer because they don’t take trends in color preference and style into consideration.

Marques Brownlee has a popular YouTube channel where he reviews tech products. His video “The Blind Smartphone Camera Test 2018!” revealed that “better cameras” do not capture and process photos that the vast majority of people prefer over images from less capable cameras.

[Image: blind test poll results, O = iPhone X, P = Xiaomi Pocophone F1. Image credit: Marques Brownlee]
To reach as large an audience as possible, Marques ran the blind tests as polls on images posted to Twitter and Instagram, drawing over 6 million votes for the best images.

He used 16 of the most popular mobile phones in a winner-take-all, bracket-style competition. The field included powerful phones like the Pixel 3, the Hydrogen, the iPhone X, and the iPhone XS, along with the less capable cameras in the BlackBerry and the Xiaomi Pocophone.

Most images are viewed on social media platforms now. Twitter and Instagram are notorious for heavy image compression, but since that is where images are actually seen, it makes sense to use these platforms to test trends and image preferences.

The results revealed an obvious trend: people do not like the perfectly exposed, high-dynamic-range photos that the “best cameras” captured.

There’s obviously a massive subjective component: people like brighter, warmer, punchy photos.
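
To make that concrete, here is a minimal sketch, using the Pillow imaging library, of how you might nudge a finished photo toward those crowd-pleasing qualities. The enhancement factors are illustrative values I chose for this example, not numbers derived from the polls:

```python
from PIL import Image, ImageEnhance

def punchify(path_in: str, path_out: str) -> None:
    img = Image.open(path_in).convert("RGB")

    # Brighter: raise overall luminance slightly.
    img = ImageEnhance.Brightness(img).enhance(1.10)

    # Punchier: boost contrast and color saturation.
    img = ImageEnhance.Contrast(img).enhance(1.15)
    img = ImageEnhance.Color(img).enhance(1.20)

    # Warmer: shift white balance toward red and away from blue.
    r, g, b = img.split()
    r = r.point(lambda v: min(255, int(v * 1.06)))
    b = b.point(lambda v: int(v * 0.94))
    img = Image.merge("RGB", (r, g, b))

    img.save(path_out)

punchify("original.jpg", "crowd_pleaser.jpg")
```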

Computational photography and large image sensors capable of capturing copious detail and high dynamic range are still no substitute for good glass and an artistic understanding of photographic principles and composition.

In the end, you have to understand what your customers prefer, and craft visual strategies designed to appeal to those preferences.



Google shutting down Google+ following massive undisclosed user data exposure

PR teams at Google and Facebook have been working overtime this week due to breaches of user data on their networks. Google announced that it would shut down Google+ after it discovered a security vulnerability that exposed the private data of up to 500,000 users. Google+ is the company’s long-struggling answer to Facebook’s giant social network.

It really is not that much of a blow for Google to shutter Google+. Google admits that Google+ has “low usage and engagement” and that 90 percent of Google+ user sessions last less than five seconds.

It is Google’s lack of disclosure on the security breach that is causing waves in the cybersecurity community. There are rules in California and Europe that govern when a company must disclose a security incident.

Google did not tell its users about the security issue when it was found in March because it didn’t appear that anyone had gained access to user information, and the company’s “Privacy & Data Protection Office” decided it was not legally required to report it, the search giant said in a blog post.

Others are citing the leak as further evidence that the large technology platforms need more regulatory oversight.

“Monopolistic internet platforms like Google and Facebook are probably ‘too big to secure’ and are certainly ‘too big to trust’ blindly,” said Jeff Hauser, from the Center for Economic and Policy Research.

Google+ had some innovative ideas that just never caught on. At one time Google put significant effort into pushing adoption of Google+, including using its data to personalize search results based on what a user’s connections had +1’d.

Thankfully I never invested heavily in Google+, but I did like how you could organize content into collections, which I aligned with market segments.

Google+ will shut down over a 10-month period, slated for completion by the end of August 2019.

Google also announced a series of reforms to its privacy policies designed to give users more control over the amount of data they share with third-party app developers.

Users will now have more “fine-grained” control over which aspects of their Google accounts they grant to third parties (e.g., calendar entries vs. Gmail), and Google will further limit third-party access to email, SMS, contacts, and phone logs.


DuckDuckGo Traffic Growth

The growth of DuckDuckGo is amazing. It took seven years for DuckDuckGo to reach 10 million searches in one day. It then took another two years to hit 20 million searches in a day. It took less than a year after that for DuckDuckGo to surpass 30 million searches in a day!

If you are not familiar with the search engine DuckDuckGo yet, you will probably be hearing more about it from now on. DuckDuckGo is a search engine, like Google, Yahoo, or Bing, but with DuckDuckGo your searches and your IP address are kept 100% anonymous. People seeking ways to reduce their digital footprint are driving the rapid traffic growth at DuckDuckGo.com.

Compared to Google, DuckDuckGo is still tiny. At the time of this blog post, the record for daily searches at DuckDuckGo was 30,602,556. That makes it less than 1% the size of Google, which handles over 3.5 billion searches per day.

However, the daily search volume on DuckDuckGo is now approximately one quarter of the daily search volume of Bing. Not bad for a small search startup with a funny name and only 40 employees.

Google continues to dwarf the search volume of competitors like DuckDuckGo, but ongoing censorship and data security issues are causing many users to question whether they want to use Google.

This week Google admitted that a security flaw exposed the private data of as many as 500,000 people. As a result, they announced they will be shutting down Google+.

People are increasingly aware that giant technology companies like Google and Facebook are making money off their private information. As more competitors to these established titans of technology begin educating users about how they are being tracked, alternatives like DuckDuckGo are gaining market share.


Artificial Neural Networks For Marketing

Artificial Neural Networks are progressive learning systems, modeled after the human brain, that continuously improve their function over time. They can be effective at extracting relevant information from big data, identifying valuable trends, relationships, and connections within the data, and then drawing on past outcomes and behaviors to help identify and implement the best marketing tactics and strategies.

The human brain consists of roughly 100 billion cells called neurons. The neurons are connected together by synapses. If enough of a neuron’s synaptic inputs fire, that neuron also fires. This process is called “thinking”.

Artificial neural networks are a form of computer program modeled after the way the human brain and nervous system works. It is not necessary to model the biological complexity of the human brain at a molecular level, just its higher level rules.

Common practical uses of neural networks include character recognition, image classification, speech recognition, and facial recognition. They are also used for predictive analytics. Read our article on How Leading Technology Companies Are Using Artificial Intelligence And Machine Learning.

Artificial Neural Networks (ANNs) are interconnected artificial processing neurons that function in unison to achieve desirable outcomes.

Artificial neurons are the elementary units of an artificial neural network. Each artificial neuron receives one or more inputs and combines them to produce a single output. Think of artificial neurons as simple processing units.

Artificial Neural Networks are composed of three main parts: the input layer, the hidden layer, and the output layer. Note that you can have n hidden layers; the term “deep” learning implies multiple hidden layers. Each layer is a one-dimensional array.

During processing, each input is separately weighted, and the weighted sum is passed through a non-linear function, known as an activation function or transfer function, on to the next set of artificial neurons.
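
Here is a minimal sketch in Python/NumPy of a single forward pass through that input/hidden/output structure; the layer sizes and random weights are arbitrary examples:

```python
import numpy as np

def sigmoid(x):
    """A common non-linear activation (transfer) function."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# One input sample with 4 features (the input layer).
x = rng.random(4)

# Randomly initialized weights and biases for one hidden layer
# of 5 neurons and an output layer of 1 neuron.
W_hidden, b_hidden = rng.random((5, 4)), rng.random(5)
W_out, b_out = rng.random((1, 5)), rng.random(1)

# Each input is weighted, summed, then passed through the activation.
hidden = sigmoid(W_hidden @ x + b_hidden)
output = sigmoid(W_out @ hidden + b_out)

print(output)  # the network's prediction for this sample
```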

Using trial-and-error learning methods, neural networks detect the patterns within a data set, ignoring data that is not significant while emphasizing the data that is most influential.

From a marketing perspective, neural networks are software tools used to assist in decision making. They are effective at gathering and extracting information from large data sources and identifying cause and effect within the data. Through the process of learning, they identify relationships and connections across databases. Once knowledge has been accumulated, neural networks can be relied on to generalize, applying past knowledge and learning to a variety of situations.

Neural networks help marketing teams with market segmentation and performance measurement while reducing costs and improving accuracy. Due to their learning ability, flexibility, adaptability, and capacity for knowledge discovery, neural networks offer many advantages over traditional models. They can assist in pattern classification, forecasting, and marketing analysis.
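
As a hedged illustration of the marketing use case, the sketch below trains a small neural network (scikit-learn’s MLPClassifier) to predict whether a customer will respond to a campaign. The two features and the labeling rule are made up purely for demonstration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Synthetic customer data: [monthly_visits, avg_order_value].
X = rng.random((200, 2)) * [30, 100]
# Made-up rule standing in for historical campaign outcomes:
# frequent, higher-spending customers tended to respond.
y = ((X[:, 0] > 15) & (X[:, 1] > 50)).astype(int)

# One hidden layer of 8 neurons, mirroring the input/hidden/output
# structure described above.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# Score a new customer: 20 visits/month, $80 average order value.
print(model.predict_proba([[20, 80]]))
```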


Keras

Keras is an open source neural network library written in Python. Keras was conceived as an interface rather than a standalone machine-learning framework. It offers a higher-level, more intuitive set of abstractions that make it easy to develop deep learning models regardless of the computational backend used. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, or Theano.

Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. It was developed as part of the research effort of project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System), and its primary author and maintainer is François Chollet, a Google engineer.

Keras contains numerous implementations of commonly used neural network building blocks such as layers, objectives, activation functions, optimizers, and a host of tools to make working with image and text data easier.

Keras allows distributed training of deep learning models on clusters of graphics processing units (GPUs), and it lets users deploy deep models on smartphones (iOS and Android), on the web, or on the Java Virtual Machine.
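
To give a feel for those high-level abstractions, here is a minimal sketch of defining, compiling, and training a tiny Keras classifier. The layer sizes and the synthetic data are arbitrary examples:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A small fully connected network: input, one hidden layer, output.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Keras bundles the optimizer, objective (loss), and metrics together.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train on synthetic data, purely for demonstration.
X = np.random.random((100, 4))
y = (X.sum(axis=1) > 2).astype("float32")
model.fit(X, y, epochs=5, batch_size=16, verbose=0)

print(model.predict(X[:3]))
```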

Keras Resources

Official Website: https://keras.io/


TensorFlow

TensorFlow was originally developed by the Google Brain team within Google’s Machine Intelligence research organization for machine learning and deep neural network research.

TensorFlow is a Python-friendly open source machine learning framework for numerical computation that makes acquiring data, training models, serving predictions, and refining future results easier. TensorFlow bundles together a slew of machine learning and deep learning (neural network) models and algorithms.

TensorFlow computations are expressed as stateful dataflow graphs. The name TensorFlow derives from the operations that such neural networks perform on multidimensional data arrays. These arrays are referred to as “tensors”.

[Diagram: TensorFlow dataflow graph]

TensorFlow is cross-platform. It runs on nearly everything: GPUs and CPUs—including mobile and embedded platforms—and even tensor processing units (TPUs), which are specialized hardware to do tensor math on.

Keras is a popular high-level interface to TensorFlow.
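
Here is a minimal sketch of the core idea, tensors flowing through operations. It is written against the modern TensorFlow 2 API, where tf.function compiles Python code into the dataflow graphs described above:

```python
import tensorflow as tf

# Tensors: multidimensional arrays that flow through operations.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# Plain eager operations on tensors.
c = tf.matmul(a, b) + 1.0

# Wrapping the computation in tf.function compiles it into a
# dataflow graph, the representation the name "TensorFlow" refers to.
@tf.function
def affine(x, w, bias):
    return tf.matmul(x, w) + bias

print(c)
print(affine(a, b, tf.constant(1.0)))
```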

TensorFlow Resources

Official Website: https://www.tensorflow.org/


How Leading Technology Companies Are Using Artificial Intelligence And Machine Learning

Artificial Intelligence looks for patterns, learns from experience, and predicts responses based on historical data. It can learn new things at incredible speed, and it can be used to accurately predict your behavior and preempt your requests.

Artificial Intelligence and Machine Learning are shaping many of the products and services you interact with every day. In future blog posts I will be discussing how Artificial Intelligence, Machine Learning, Neural Networks, and Predictive Analytics are being used by marketers to achieve competitive advantage.

The ability of AI (Artificial Intelligence) to simulate human thinking means it can streamline our lives. It can preempt our needs and requests, making products and services more user friendly as machines learn our needs and figure out how to serve us better.

Here is how some of the top companies are using Artificial Intelligence.

Google

Google is investing heavily in Artificial Intelligence and Machine Learning. Google acquired the AI company DeepMind for its energy consumption, digital health, and general-purpose artificial intelligence programs, and is integrating the technology into many of its products and services. They are primarily using TensorFlow, an open source software library for high performance numerical computation. They are using Artificial Intelligence and pattern recognition to improve their core search services. Google is also using AI and machine learning for their facial recognition services, and for natural language processing to power their real-time language translation. Google Assistant uses Artificial Intelligence, as does the Google Home series of smart home products, like the Nest thermostat. Google is using a TensorFlow model in Gmail to understand the context of an email and predict likely replies; they call this feature “Smart Reply.” Having acquired more than 50 AI startups in 2015-16, Google seems to be only at the beginning of its AI agenda. You can learn more about Google’s AI projects here: ai.google/.

Amazon

Amazon has been investing heavily in Artificial Intelligence for over 20 years. Amazon’s approach to AI is called a “flywheel”: innovation around machine learning in one area of the company fuels the efforts of other teams, keeping AI innovation humming along and spreading energy and knowledge across the company. Artificial Intelligence and Machine Learning (ML) algorithms drive many of their internal systems. Artificial Intelligence is also core to their customer experience – from Amazon.com’s recommendation engine that analyzes and predicts your shopping patterns, to the Echo powered by Alexa, to the path optimization in their fulfillment centers. Amazon’s mission is to share their Artificial Intelligence and Machine Learning capabilities as fully managed services, putting them into the hands of every developer and data scientist on Amazon Web Services (AWS). Learn more about Amazon Artificial Intelligence and Machine Learning.

Facebook

Facebook has come under fire for their widespread use of Artificial Intelligence analytics to target users for marketing and messaging purposes, but they remain committed to advancing the field of machine intelligence and are creating new technologies to give people better ways to communicate. They have also come under fire for not doing enough to moderate content on their platform. Billions of text posts, photos, and videos are uploaded to Facebook every day. It is impossible for human moderators to comprehensively sift through that much content. Facebook uses artificial intelligence to suggest photo tags, populate your newsfeed, and detect bots and fake users. A new system, codenamed “Rosetta,” helps teams at Facebook and Instagram identify text within images to better understand what their subject is and more easily classify them for search or to flag abusive content. Facebook’s Rosetta system scans over a billion images and video frames daily across multiple languages in real time. Learn more about Facebook AI Research. Facebook also has several Open Source Tools For Advancing The World’s AI.

Microsoft

Microsoft added Research and AI as their fourth silo alongside Office, Windows, and Cloud, with the stated goal of making broad-spectrum AI applications more accessible and everyday machines more intelligent. Microsoft is integrating Artificial Intelligence into a broad range of Microsoft products and services. Cortana is powered by machine learning, allowing the virtual assistant to build insight and expertise over time. AI in Office 365 helps users expand their creativity, connect to relevant information, and surface new insights. Microsoft Dynamics 365 offers business applications that use Artificial Intelligence and Machine Learning to analyze data, improve business processes, and deliver predictive analytics. Bing is using advances in Artificial Intelligence to make it even easier to find what you’re looking for. Microsoft’s Azure Cloud Computing Services has a wide portfolio of AI productivity tools and services. Microsoft’s Machine Learning Studio is a powerfully simple browser-based, visual drag-and-drop authoring environment where no coding is necessary.

Apple

Apple is the most tight-lipped among top technology companies about their AI research. Siri was one of the first widely used examples of Artificial Intelligence used by consumers. Apple had a head start, but appears to have fallen behind their competitors. Apple’s Artificial Intelligence strategy continues to be focused on running workloads locally on devices, rather than running them on cloud-based resources like Google, Amazon, and Microsoft do. This is consistent with Apple’s stance on respecting User Privacy. Apple believes their approach has some advantages. They have a framework called Create ML that app makers can use to train AI models on Macs. Create ML is the Machine Learning framework used across Apple products, including Siri, Camera, and QuickType. Apple has also added Artificial Intelligence and Machine Learning to its Core ML software allowing developers to easily incorporate AI models into apps for iPhones and other Apple devices. It remains to be seen if Apple can get developers using the Create ML technology, but given the number of Apple devices consumers have, I expect they will get some traction with it.

These are just a few examples of how leading technology companies are using artificial intelligence to improve the products and services we use everyday.


Facebook Usage Declines

42% of Facebook users say they have taken a break from checking the platform for a period of several weeks or more. 26% say they have deleted the Facebook app from their cellphone.

According to Pew Research, Facebook users continue to reduce the amount of time they are spending on the platform. Just over half of Facebook users ages 18 and older (54%) say they have adjusted their privacy settings in the past 12 months, according to a new Pew Research Center survey. Around four-in-ten (42%) say they have taken a break from checking the platform for a period of several weeks or more, while around a quarter (26%) say they have deleted the Facebook app from their cellphone. All told, some 74% of Facebook users say they have taken at least one of these three actions in the past year.

The findings come from a Pew Research survey of U.S. adults conducted May 29, 2018 through June 11, 2018.

There are, however, age differences in the share of Facebook users who have recently taken some of these actions. Most notably, 44% of younger users (those ages 18 to 29) say they have deleted the Facebook app from their phone in the past year, nearly four times the share of users ages 65 and older (12%) who have done so. Similarly, older users are much less likely to say they have adjusted their Facebook privacy settings in the past 12 months: Only a third of Facebook users 65 and older have done this, compared with 64% of younger users. In earlier research, Pew Research Center has found that a larger share of younger than older adults use Facebook. Still, similar shares of older and younger users have taken a break from Facebook for a period of several weeks or more.

With 42 percent of the audience taking extended breaks from the platform, daily active users should decline. And with more than half of users changing their privacy settings, advertisers should see less opportunity for accurate ad targeting and lower advertising efficiency on Facebook.

Full Article at Pew Research.


Latent Semantic Indexing – Does It Help SEO?

Latent Semantic Indexing (LSI) and Latent Semantic Analysis (LSA) are techniques in natural language processing to analyze relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms.


I thought it might be helpful to explore the concept of Latent Semantic Indexing and its impact on Search Engine Optimization (SEO). If you research search engine optimization techniques you are likely to come across articles on Latent Semantic Indexing (LSI) or Latent Semantic Analysis (LSA). There is some debate about the effectiveness of creating content designed to appeal to LSI algorithms in order to improve organic search placement. My position is that by understanding LSI / LSA you can create better content, regardless of whether it improves organic search results on Google, Bing, etc.

What Is Latent Semantic Indexing?

In simple terms, Latent Semantic Indexing allows a search engine to include pages with the text “Home For Sale” in the search results for the search term “House For Sale,” because “Home” and “House” are related terms. LSI is a technique that reveals the underlying concepts of content by means of synonyms and related words. It is basically using data correlation to find related terms (see article: A Primer On Data Correlation For Marketers). Latent Semantic Indexing finds hidden (latent) relationships between words (semantics) in order to improve information understanding (indexing).
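
For the technically curious, here is a minimal sketch of the underlying technique: Latent Semantic Analysis is commonly implemented as a truncated singular value decomposition of a term-document matrix. In the toy example below (using scikit-learn), the documents about homes and houses end up close together in the reduced "concept" space even where they share few exact words:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "home for sale in a quiet neighborhood",
    "house for sale near good schools",
    "new home listings and houses on the market",
    "latest smartphone camera reviews",
]

# Term-document matrix weighted by TF-IDF.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)

# Truncated SVD projects documents onto a small number of
# latent "concepts" (the L, S, and A in LSA).
lsa = TruncatedSVD(n_components=2, random_state=0)
X_lsa = lsa.fit_transform(X)

# Documents about homes/houses cluster together even when they
# share few exact words; the phone review does not.
print(cosine_similarity(X_lsa))
```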

Latent Semantic Indexing As SEO Strategy

It has been suggested that using synonyms and related terms in optimized website content could help a search engine better understand the content and rank it higher in search results. The idea is to take your target keyword or search term and identify a list of “LSI keywords” to include in the content.

Search engines attempt to understand the context of any piece of content they index. The field of semantics (the study of meaning in language) is a fundamental part of this approach.

While it is believed that search engines relied heavily on LSI in the early stages of their development, there is no evidence that Google and Bing rely heavily on it now.

Why Consider LSI As Part Of Your Content Strategy?

My opinion is that you should focus on creating good quality content on your website rather than trying to game Google or Bing with optimization techniques. As part of my content strategies, I use a basic understanding of Latent Semantic Analysis to create better content. Different people may use different terms to describe the same topic, and including these related terms can make the content more relevant to the reader. I also use LSA to make sure I am not overusing the primary keyword or search term, and to come up with derivative articles related to the primary article. An effective SEO strategy should also include relevant backlinks, relevant alt tags, and so on.
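
As a small practical example of the "don't overuse your primary term" point, a few lines of Python can flag when one keyword dominates a draft. The 2% threshold is an arbitrary rule of thumb for this example, not a published ranking factor:

```python
import re
from collections import Counter

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of the words in `text` that are `keyword`."""
    words = re.findall(r"[a-z']+", text.lower())
    return words.count(keyword.lower()) / max(len(words), 1)

draft = "Homes for sale: browse homes, compare homes, and find homes fast."
density = keyword_density(draft, "homes")

if density > 0.02:  # arbitrary 2% rule of thumb
    print(f"'homes' is {density:.0%} of the text; consider synonyms like 'houses'.")
```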

In summary, I am not convinced that using Latent Semantic Analysis will improve your search rankings, but I feel it can improve your content.


Why Open Source?

There are a variety of good reasons to use open source technology when creating solutions for your business.

Lower total cost of ownership (TCO): Using open source software generally yields a lower total cost of ownership than closed source and proprietary alternatives.

Shift developers from low-value work to high-value work: The easy problems have already been solved by open source. Operating systems, web servers, content management systems, and databases all have established, market-leading open source solutions.

Modularity & Flexibility of key components: Proprietary software solutions tend to be monolithic, and you are not allowed to change how they function, or to add features that you need. Proprietary software locks users to a particular vendor, or “platform”. Open source projects tend to be more modularly architected, improving both the flexibility, and the robustness of the code. Open source solutions are typically leaner and more agile. Since you have access to the source code, you can often apply fixes or add features, both large and small, at your own convenience, not at the convenience of the publishing organization’s release cycle.

Secure & Transparent: Empirically, open source tends to produce better quality software than its proprietary or alternative counterparts. With closed source software, the only developers that can potentially detect, diagnose, triage, and resolve software bugs are those that happen to be employed by the company that publishes the software. Open source provides three advantages: first, you have the opportunity to tap the knowledge of the world’s best developers, not just those on one organization’s payroll. Second, the number of potentially contributing developers and thus the potential knowledge pool is orders of magnitude larger. Finally, open source software gets adapted to a variety of use cases, not just the one the publisher originally intended, surfacing bugs and edge cases much more rapidly than traditional, predictive QA processes.