There’s been a growing trend to get rid of middle management, which coincides with the belief that AI and software robots can automate work done by human professionals. So fewer managers are needed and, therefore, less human resources management. But when more of the work is done by machines, the machines also need to be managed. Consequently, we probably need digital counterparts of middle management and human resources.
A recent podcast featured an interesting discussion about micro-tasks that analyze data and make micro-predictions, such as analyzing a specific dataset from one source and trying to conclude something from it. This analysis doesn’t try to understand or optimize a larger problem or task; it just focuses on one specific part. A larger system can have dozens of components like that.
Then there is another layer on top that combines the output and conclusions from several micro-AI modules, where each individual micro-AI focuses on modelling and explaining one specific data set.
For example, running shoes (yes, there are already running shoes that collect all kinds of data) can capture your cadence, stride length, ground contact time and foot strike angle to optimize your running speed. When you think about your running performance as a whole, this is only one part. You must also think about heart rate, energy levels (blood glucose), readiness (have you slept enough) and many other things. But it would be too complex to build one huge AI to optimize all this data; it is better to have modules for each need and then another layer to combine them all.
It is the same with software robots. One robot can transfer inventory numbers at the end of the month from SAP to your accounting system. Producing monthly financial statements and reports is much more work than simply compiling those inventory numbers. Other robots could perform the individual tasks, with higher-level robots putting all this information together.
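The layered model above can be sketched in a few lines of Python. The robots, figures and report structure here are invented placeholders, not a real SAP or accounting integration:

```python
# Sketch of the layered model described above: small single-purpose
# "micro" robots, plus a higher-level layer that combines their output.
# All names and numbers are illustrative assumptions.

def inventory_robot():
    """Micro-task: fetch month-end inventory value (stubbed)."""
    return {"inventory": 120_000.0}

def receivables_robot():
    """Micro-task: fetch accounts receivable (stubbed)."""
    return {"receivables": 45_500.0}

def monthly_statement(robots):
    """Higher-level layer: run each micro-robot and merge the results."""
    report = {}
    for robot in robots:
        report.update(robot())
    report["total"] = sum(report.values())
    return report

report = monthly_statement([inventory_robot, receivables_robot])
print(report)  # {'inventory': 120000.0, 'receivables': 45500.0, 'total': 165500.0}
```

Each micro-robot stays trivially simple; all the coordination lives in the higher layer, which is exactly the division of labor the text describes.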
This is nothing new as such. Modularity has been an essential principle in software design for decades. With AI and automation, we often talk about extensive and complex solutions linked to many tasks and systems around an enterprise. Because these are also relatively new areas, each company and project usually tries to build a large system that aims to be the perfect solution for a significant process.
When we have these micro modules to handle a specific need, we can then develop design principles. Not to implement from scratch, but to find the best components to do micro-tasks, optimize their use and then get them to work together. It is a kind of HR and management function. You must find the best resources to do things you need, and then you must manage them. But these management layers are digital, i.e., algorithms that choose the best algorithm for each micro-need and optimally use them. Algorithms manage algorithms.
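A minimal sketch of this “algorithms manage algorithms” idea, assuming each candidate component has already been scored on a validation set. The tasks, model names and scores below are made up:

```python
# A digital "management layer" that assigns the best available
# component to each micro-need based on validation scores.
# Candidate models and scores are illustrative stand-ins.

candidates = {
    "cadence": {"model_a": 0.72, "model_b": 0.91},  # validation scores
    "sleep":   {"model_c": 0.65, "model_d": 0.60},
}

def manage(candidates):
    """Pick the highest-scoring component for each micro-need."""
    return {task: max(scores, key=scores.get)
            for task, scores in candidates.items()}

assignments = manage(candidates)
print(assignments)  # {'cadence': 'model_b', 'sleep': 'model_c'}
```

A real manager would also monitor the chosen components over time and reassign work when scores drift, much as a human manager reallocates staff.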
This also changes the ecosystem and business models for AI and automation, opening up new business areas: for example, building micro-modules, combining and orchestrating their output, and managing these digital resources.
Sometimes it is good to compare technology and machines to models of how human beings perform jobs, especially when AI and automation are to perform tasks that humans have previously done. People have, after all, spent centuries developing models for how to organize tasks in organizations. It doesn’t mean we can or should copy the same models to machines, but it can give us ideas on how best to use machines. There are reasons why people specialize in certain areas, why different professionals work together and why the management layer must optimize resources. We need to solve similar issues when designing, using, and managing algorithms, machines, and digital processes.
The article first appeared on Disruptive.Asia.
Photo source: Wikipedia.
Numerous cities around the world want to become ‘smart’ cities. One main objective of smart cities is to collect data to improve and develop services. As a result, many vendors are keen to get into the smart city business. These projects are network, infrastructure and big data intensive. So how does this benefit ordinary people? Any value to individuals and their privacy seems to have a lower priority, although the ultimate target should surely be to improve the lives of residents.
Smart city concepts started to trend some years ago and are increasing in popularity. 5G and edge computing are also seen as essential technology boosts for those projects, which is why network vendors and carriers are involved in most of them. Smart cities are seen as a good reason to build technology infrastructure to collect, transfer and analyze all that data.
Cities aim to collect and analyze data to optimize services and operations for many purposes, such as traffic management, public transportation, power consumption and production, water supply, waste collection, crime reduction, healthcare and community services. Environmental aspects are also becoming more critical: air quality, noise pollution and energy consumption are other areas cities want to improve.
This all sounds great, but as we know from many other technology projects, it’s a very different thing to focus on developing services for individuals, their user experience, and their unique needs and values. Beyond that, privacy and data protection are now critical issues in these kinds of huge data projects. At worst, smart city infrastructure resembles a real ‘big brother’ scenario.
It is possible to build smart cities that serve individuals better, but it would require parties to develop services from a consumer’s perspective. The concept could help people get better services, optimize their movements, live healthier lives, save time and money and improve the quality of life in many ways. Ten years ago, we had to rely on mobile app developers to provide useful apps to individuals because carriers and network vendors were not able or motivated to do it.
Many services would also become more valuable if we were able to combine personal and public data. Your movements combined with traffic and public transportation data, air quality data with your daily walking and running routes, and your personal habits with daily energy consumption peaks are just some examples. Together, the two data sources could create value for the individual and society.
This could be achieved if individuals had access to public data combined with their own personal data. In this way, privacy could be respected and preserved. But if public services start to surveil individual people, we immediately encounter data protection and privacy risks. It would also lead to a model in which cities, authorities and service providers plan what they think is suitable for individuals, rather than offering individuals tools to improve their own lives.
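The distinction can be made concrete with a small sketch: public open data is pulled down to the individual’s device and combined locally with personal data, so the personal side never leaves the device. The air-quality figures and routes are invented for illustration:

```python
# Sketch of the privacy-preserving direction discussed above: public
# city data is combined with personal data locally, on the user's own
# device, so nothing personal is sent to the city. Values are invented.

public_air_quality = {"riverside": 18, "main_street": 62}  # e.g. a PM2.5 index

personal_routes = ["riverside", "main_street"]  # stays on the device

def cleanest_route(routes, air_quality):
    """Runs locally: rank the user's own routes using public air data."""
    return min(routes, key=lambda r: air_quality.get(r, float("inf")))

print(cleanest_route(personal_routes, public_air_quality))  # riverside
```

The city publishes only aggregate environmental data; the sensitive input (where this person actually walks) is never uploaded.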
For the city authorities, infrastructure vendors and carriers that dominate these projects, it is neither easy nor natural to build systems from an individual’s point of view. Of course, politicians in the city councils should be thinking of the residents they represent, but that is not enough. We also need technology solutions and vendors that focus on building solutions and services for individuals.
This would likely involve an additional layer for the services. Maybe something similar to the app stores made for mobile apps, but one that also enables users to protect their privacy and manage their personal data. It could also empower many other parties to develop services for residents and give residents the power to decide what services they want to use. The best services are hardly ever developed by authorities and big tech companies deciding what individuals want.
Smart cities should be focused more on the needs of residents. There are many ‘nice’ and ambitious plans to make cities and the lives of residents better, but nice plans are never enough. The real questions are who are the actual customers, who can decide which services to use and who will control the data. To make these services beneficial for people, the concepts, technology, architecture, data and business models should be designed to empower people, not just to surveil and control them.
Non-fungible tokens (NFTs) have gathered a lot of interest recently. They certify digital assets, including digital art pieces worth millions of dollars; Christie’s has already sold an NFT work of art by Beeple. Of course, this raises the question: is this something more concrete than the Initial Coin Offerings (ICOs) of 2017?
NFTs are digital certificates on a digital ledger, or blockchain, that prove a digital asset is unique and therefore not interchangeable. NFTs are used to represent and certify photos, videos, audio and other types of digital files. Art is currently getting all the publicity, but NFTs can certify many other items, including text, software code or even tweets. The fundamental idea is that a digital object can be tokenized, making it unique; in principle, it is then not possible to make further copies of it.
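The core tokenization idea can be illustrated with a toy ledger. This is only a sketch of the principle; real NFTs live on a blockchain with smart contracts, wallets and consensus, none of which is modelled here:

```python
# Toy illustration of tokenization: the digital file is hashed, and a
# ledger entry binds that hash to an owner. The dict stands in for a
# blockchain; the file bytes and owner name are invented.

import hashlib

ledger = {}  # token_id -> owner

def mint(data: bytes, owner: str) -> str:
    """Derive a unique token id from the content and record ownership."""
    token_id = hashlib.sha256(data).hexdigest()
    if token_id in ledger:
        raise ValueError("already minted: the token is unique")
    ledger[token_id] = owner
    return token_id

token = mint(b"beeple artwork bytes...", "collector_1")
print(ledger[token])  # collector_1
```

Anyone can still copy the file’s bytes; what is unique is the ledger entry that binds this content to this owner.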
Some people have commented that the value, and the irony, of NFTs is that although the name says non-fungible, they are easily traded. They can be unique, but it is easy to exchange them. And as we know, things can have value and liquidity if there is enough demand and supply and transaction costs are low enough.
Many of us still remember the 2017 ICO boom, when companies started to offer their own tokens. Typically, they were startups (or not even startups, just startup ideas) with business plans (called white papers). They included a token as an important component of their business plans and then started to sell those tokens. Some projects were able to raise significant money and, in rare cases, build a long-term business. Many people participated in ICOs just to learn how to buy a token, without thinking of the ROI. Some people had many bitcoins, had a hard time selling them (because they couldn’t explain how they had acquired them) and wanted to diversify into other tokens.
A fundamental difference between NFTs and ICOs is that ICO tokens usually represented only future promises, while NFTs represent assets, especially digital assets. In that way, buyers can assess for themselves how they value the underlying asset. It is always difficult to evaluate the value of art, and NFT art has precisely the same challenge. Then there are many other digital items, like pieces of music, virtual items in games and software components, that can have an NFT.
There are also plans to expand the NFT concept beyond virtual and digital items. There could be digital certificates to represent physical items, for example, a certificate to prove real estate ownership. This part requires a legal framework that enables the use of such digital certificates.
NFTs have also generated crowdfunding plans. People and companies could sell fractions of their work, for example, music, movies or software. NFTs can make this market more effective, but they don’t remove all crowdfunding challenges, especially finding the correct value and then making the secondary market liquid. It is also good to remember that the model can work for items that have enough supply and demand, but NFTs alone don’t guarantee liquidity and value for any item.
There are several new business plan ideas based on NFTs. For example, if software is published as an NFT, there could be a new GitHub, especially for NFT software. Companies and individuals could start to license data as NFT packages, and media companies could also offer NFT content.
Ethereum is the most commonly used platform for NFTs; it is currently based on the proof-of-work model, although a move to proof-of-stake is planned. Blockchain still has fundamental questions around which solutions have a long-term future and value. When blockchain software is updated and a fork is created, backward compatibility is an important question. A soft fork means the new version is backward compatible; a hard fork means it is not. If a new version is not backward compatible, old tokens won’t work in the new system. In the end, it is the community of each token that decides which updates and forks take place. The fundamental question for each blockchain solution is its future backward compatibility. At the moment, Ethereum looks like a safe bet for implementing blockchain-based solutions. With lesser-known blockchains, it is harder to predict the future.
The NFT concept is more concrete and makes it easier to evaluate items than ICOs did. But in the end, an NFT’s value depends on the underlying items, so it is impossible to say if an NFT as such represents something valuable or only empty promises. NFT is an excellent model to manage and trade the value of digital items. But it is crucial to remember that an NFT alone doesn’t create value for a digital item. The items must have value, and NFTs help to make the value tangible.
The article first appeared on Disruptive.Asia.
The COVID-19 pandemic has been significant for wearable devices. They have helped to detect early COVID-19 symptoms, and they have also helped people live healthier lives and take care of their wellbeing during the pandemic. The last 18 months have been a good time for many digital services, from video conferences to food delivery apps. Maybe it will permanently change how we manage our wellness and health and help mobile healthcare become mainstream.
Higher resting heart rate and body temperature are early signs of COVID. For example, research institutes and universities have developed software that uses Oura ring data to detect these early symptoms. Employers have also bought wearable devices for employees to detect early symptoms and warn them not to come to work if there are warning signs. This is the case in companies ranging from customer and health care services to professional sports teams.
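The detection idea is essentially a comparison against a personal baseline. A toy sketch follows, with assumed readings and thresholds that are not taken from any real product or study:

```python
# Toy early-warning check: compare today's resting heart rate and
# temperature against the wearer's own recent baseline and flag
# unusual elevations. All numbers are invented for illustration.

from statistics import mean

def elevated(history, today, threshold):
    """True if today's reading exceeds the personal baseline by threshold."""
    return today - mean(history) > threshold

resting_hr = [52, 54, 53, 51, 53]             # last five nights, bpm
temperature = [36.5, 36.6, 36.5, 36.4, 36.6]  # deg C

warn = elevated(resting_hr, 61, threshold=5) and \
       elevated(temperature, 37.3, threshold=0.5)
print("early warning" if warn else "normal")  # early warning
```

The key point is that the comparison is against the individual’s own history, not a population average, which is what makes wearable data useful here.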
Wearable manufacturers have reported that, based on their data, the COVID situation has also helped some people sleep better. The reason might be that people don’t need to hurry to work and take the kids to school in the morning. But as the situation has continued, we have also seen more people feeling stress, i.e., based on the data, having a higher heart rate (HR) and not sleeping as well.
The situation has also changed exercise habits. People don’t walk to work or take public transportation, and there are no daily breaks to go out for lunch or coffee. Health professionals are worried that people are sitting too much during the pandemic. Others have started to exercise more, replacing daily walks with daily runs. This has resulted in more sports injuries.
All this has prompted people to monitor their daily wellness and health data. People have also hesitated to see a doctor or go to the hospital, instead monitoring their health with a smartwatch that can measure heart rate or take an ECG (electrocardiogram). And if you have a Zoom call with your doctor, it is actually useful to have that data at hand (so to speak).
This all demonstrates that people have started to use more of these devices and are getting more data, but it’s not that simple. What should I conclude from my heart rate or heart rate variability? Do I exercise too little or too much? Are my sleep quality and exercise linked to each other? What combination of data really indicates some illness?
When people get more data, it doesn’t mean they suddenly become health, sleep, diet and wellness experts. Some people might feel that way when they Google health care instructions, but that can make things worse. This data can be beneficial for health and wellness monitoring, but it needs better software to analyze it or make it available to professionals.
Mobile healthcare has been a hot topic for years, but the COVID period has really brought it to the fore. Healthcare organizations tend to be rather conservative in taking on new things, but this period has forced them to find new solutions quickly. I know many mobile healthcare startups that have struggled for years. One big problem has been that healthcare organizations move slowly, making them difficult customers for agile startups. The other problem is getting access to reliable and accurate data. Many of those companies have offered solutions to transfer data to a doctor or hospital, but often people have to capture the data themselves, e.g. measure their glucose, blood pressure or heart rate and enter it into an app. Some people find this challenging, and others are just too lazy to do it. And there are those who might want to ‘fix’ their own numbers, either to avoid embarrassment or to show off.
So, now we have more data, and we have solutions to transfer the data. But we still have a couple of problems: 1) privacy and data security for sensitive wellness data, and 2) more systematic models to utilize data, not only from one but from several wearable devices. This means we need solutions that collect data from several devices, combine it, and at the same time protect privacy. It would also help if this data could in the future be combined with other health care data, like health history.
With new technology and concepts, it typically takes years to make the breakthrough. It often also needs some special triggers to get things to happen. I remember the first great mobile health tech visions 20 years ago with 3G hype. Now it looks like the pandemic has helped us over some major obstacles, and the wearable market has also developed rapidly. We should now see rapid and significant development with more applications and services using wellness data more effectively, with a subsequent boost to mobile healthcare.
The article first appeared on Disruptive.Asia.
It’s not expensive to buy a spy, according to a recent article. You can ‘buy’ a spy for $10,000 a year or, in more significant cases, you may need to pay $40,000 to $70,000, especially if the spy takes a considerable risk. And people have other motives for wanting to sell or give away information, not just military and international political secrets. Human beings are a significant security risk for businesses. Can we do something to improve this weakest link?
Colonel Vladimir Vetrov was one of the most important spies of the Cold War. He worked for the KGB and leaked more than 3,000 pages of documents to French intelligence, including the names of more than 400 Soviet operating agents. He operated from 1981 to 1982, and it is said his information went directly to President Reagan. He played an essential role in exposing the weaknesses of the Soviet Union, its dependence on stealing Western technology and how an accelerating arms race was driving it to collapse.
However, it seems Colonel Vetrov didn’t do this for the money. He received some small gifts that he gave to his mistress, but nothing significant. He was more embittered with his career development at the KGB and also frustrated by the Soviet system. Several studies and cases demonstrate that embitterment is often a more important motive for spies to leak information than simple greed.
Edward Snowden leaked highly classified information. His motivation was not clear, but now that he is in Russia, he has indicated he was unhappy that the US authorities spied on their own people. Wikileaks has also received leaked information from other people working in governmental agencies.
Governments and enterprises spend a lot of money developing better solutions for physical and cybersecurity, which are becoming increasingly significant, and these investments are definitely needed. But at the same time, it is important to remember: it’s people that leak information and create holes in even the most sophisticated systems.
I have personally seen cases of spying or information leaking during my career. Once, a person at a customer, unhappy about his position, leaked information about our competitors and about how some people in his organization worked with the other vendors. In another case, a company warned us that a cleaner in our project office had been collecting documents and photos from our bid materials. In one extreme case, someone set off a fire alarm in an office, and several laptops belonging to a new project team went missing. All these are old cases.
The question is, who can you trust? It is not an easy question to answer, and it is not black and white. Even the most loyal person can change and start to leak information. We could also say that no one is totally reliable; most people reveal information at some point, either intentionally or unintentionally.
One solution is to keep people loyal. A good salary helps, but even more important is to make people feel they are being treated fairly. Companies try to identify problems to keep their employees loyal and reliable, but it is rarely enough.
That raises the question of what information is relevant. Many companies hide information that is not very relevant to anyone, whether competitors or customers. Those parties can usually get that information quite easily anyway, so it is not a good investment to hide it at a high cost. It can also increase the risk of leaks if employees feel that irrelevant information is being classified as secret.
There are technology solutions to prevent, identify and reveal spying and information leaks. For example, one old method is to make each copy of a piece of information (e.g. a document) unique in order to identify whose copy was leaked. It is also important to track who has copied confidential information or had access to a system. There are other solutions too, e.g. identifying unusual behavior, setting test traps or monitoring communications.
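The “unique copy” method mentioned above can be sketched as a simple per-recipient marker. A real scheme would hide the variation in wording or formatting; the visible comment marker here is purely illustrative:

```python
# Sketch of the canary-trap idea: each recipient gets a copy carrying a
# unique marker derived from their identity, so a leaked copy can be
# traced back to its source. Names and document text are invented.

import hashlib

def fingerprint(text, recipient):
    """Derive a per-recipient marker and embed it in the document."""
    tag = hashlib.sha256(recipient.encode()).hexdigest()[:8]
    # A real system would hide the marker in wording or spacing.
    return text + f"\n<!-- {tag} -->", tag

def identify(leaked, recipients):
    """Match the marker in a leaked copy back to a recipient."""
    for r in recipients:
        _, tag = fingerprint("", r)
        if tag in leaked:
            return r
    return None

doc, _ = fingerprint("Confidential bid terms...", "alice@example.com")
print(identify(doc, ["bob@example.com", "alice@example.com"]))  # alice@example.com
```

The point is not the specific marker but the workflow: every copy differs, so the leak itself identifies the leaker.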
It doesn’t make sense for companies to take measures similar to those of critical governmental agencies if doing so creates a ‘bad spirit’ in the organization. One big risk area nowadays is employees using their own devices and personal communication tools. Here, several simple solutions make sense.
If sensitive discussions take place via a messaging app, a Facebook group or another similar service, whether between business partners preparing a bid, between a company and its law firm, or among board members, the risk of inadvertently sharing information with other parties increases. Sometimes it can happen accidentally, especially when people are handling multiple groups and discussions simultaneously. In many of these cases, it is not realistic to force people to use higher-security tools, which can be challenging to enforce between organizations. Most security tools have been designed for use within a single organization.
Technology is not the only solution to stop people from leaking confidential information. But technology can help to avoid accidental sharing, easy leaking and identify the sources of leaks. These solutions must be easy to use, and they must work with commercial off-the-shelf (COTS) technologies and services. They can help keep information in closed groups, prevent direct sharing, and identify if someone has shared confidential information.
Security and trust in people are not black and white; they are more like shades of grey. There will always be people who want to spy and leak information, whatever it takes. But for the majority, it probably helps to have clear rules, better tools and a higher risk of getting caught. Any company that invests in building security into its physical and cyber environments must also think about building and monitoring trust with its people.
We are all probably skeptical of people who tell us what we should do because they think it’s best for us. A good example is adults telling kids and teenagers what to do, and what not to do, in order to protect them. Apple and Google are doing something similar with privacy. They want to be consumers’ parents and protect their privacy, but they also want to keep control. Do consumers really want this, or would they rather control what to do with their own data and where to use it? Here lies an opportunity for a new data business.
Apple is introducing new models in the latest iOS versions that let users control the trackers in mobile apps. Basically, a user must explicitly allow an app to track them across other apps and websites and collect data. Not surprisingly, there are estimates that around 70% of people, if asked, would not allow this type of tracking, so Apple may be onto something. However, this also increases Apple’s control of the ecosystem and makes it even more of a walled garden by giving Apple control over what app vendors can do and how they do it.
This will have an impact on other companies, like Facebook and Tencent, that operate in online advertising. Facebook has already warned that this would affect its revenue. Tencent and other Chinese mobile internet companies have developed workarounds for the model.
Google’s Chrome will shortly stop supporting third-party cookies, making it harder to track users on the web. Simultaneously, Google is preparing new solutions to profile and segment users based on their browsing history, enabling advertisers to target ads better. This comes from Google’s Privacy Sandbox project and gives Google a more central role in the advertising ecosystem, making it harder for smaller ad companies and advertisers to work independently.
Privacy and user tracking resemble something from the ‘wild west’. It becomes more complex when a few companies can control a significant part of the internet and mobile ecosystems. This may be specifically about web tracking and ad targeting, but Apple’s Health app collects data from wearable apps and enables downloading of health records, and Google Fit aims to do the same.
All this opens the opportunity for a new unholy alliance between consumers and enterprises. Consumers could share their profiles directly with businesses and bypass the internet companies if they could see concrete benefits. This is not a new idea, but it needs easy solutions to become a reality. It is unlikely consumers will do something just for better privacy; they will want to see those benefits quickly.
Consider what this business and consumer cooperation could mean in practice: for example, a consumer could share a shopping profile with a retailer in exchange for better offers and prices, or a wellness profile with a health service for more personalized advice.
These are a few examples of how users could have a direct data relationship without the internet and mobile giants trying to control it. But consumers will need tools to collect their data and to share profiles (not raw data). It can’t be something each individual negotiates alone with enterprises; the enterprises would dominate, and consumers wouldn’t know the right price to demand. Consumers need weapons (i.e. tools and models) to do this properly. Ideally, this would be an open ecosystem with open-source tools and open APIs, where different parties and developers could provide the means for consumers to keep control of their data.
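The “share profiles, not raw data” idea can be sketched as a profile builder that runs on the consumer’s side and emits only coarse summaries. The purchase data and profile fields below are invented:

```python
# Sketch of a consumer-side profile builder: raw events stay with the
# consumer, and only a coarse, shareable summary is offered to a
# business. Categories, amounts and band names are invented.

from collections import Counter

raw_purchases = [  # stays with the consumer
    ("coffee", 4.0), ("running shoes", 120.0), ("coffee", 4.5),
]

def build_profile(purchases):
    """Summarize raw events into coarse, shareable interest categories."""
    counts = Counter(item for item, _ in purchases)
    return {"top_interest": counts.most_common(1)[0][0],
            "monthly_spend_band": "under_200"}  # coarse band, not exact sum

print(build_profile(raw_purchases))
# {'top_interest': 'coffee', 'monthly_spend_band': 'under_200'}
```

The business sees enough to make a relevant offer, while the itemized history, the part that would enable surveillance, never leaves the consumer’s device.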
All this opens the door to new technology and companies to offer solutions for consumers and enterprises. Could this be the most significant change in the data business since the early days of the internet? Regulators could also accelerate this development by introducing new privacy rules, giving more power to consumers to control their data and restricting the internet giants’ dominating market position.
Current privacy and data discussions and developments can be confusing. Even though there are parties that want to protect consumers, they often add restrictions that make consumers’ lives more complex, particularly if they continuously need to click approvals. At the same time, data analytics offers more opportunities for consumers and businesses alike to utilize data for better services and better prices, and to make lives easier. The motives of some ‘protectors’ are not very clear, and maybe not as ‘innocent’ or ‘honorable’ as they might appear. There is also the possibility of ‘data dominance’ simply moving from one actor to another.
Long-term solutions for data and privacy cannot be based on the controls and restrictions of a few big companies. Consumers must be able to control and utilize their own data. All kinds of companies must also be able to use data if they can offer value to consumers. Otherwise, not only advertising but many other areas, including health and finance services, could end up in the control of the internet giants.
The article was first published on Disruptive.Asia.
Data is the basis for many operations, but it doesn’t mean data is always reliable. Things can get complicated when you don’t know which data source is reliable and which is not. But we must use data all the time. Sometimes it is possible to increase the accuracy, but the more meaningful solution is to build a software layer to correct data before using it.
I wrote earlier about known and unknown things and data points. The reality is even more complex. We may know that some data is relevant and available, but we don’t always know how reliable it is. We all know about opinion polls and their error margins. That is just one example, but uncertainty is linked to all data sources and the models that utilize them.
In aeroplanes or nuclear power stations, the core systems do not necessarily trust individual sensors or data sources. There can be many reasons why a particular sensor gives incorrect data. For example, a pitot tube that measures an aircraft’s airspeed can transmit incorrect information if frozen, which has caused several plane crashes. Today, a plane typically has several pitot tubes, and the software tries to draw conclusions and give pilots warnings if one or more give inconsistent readings.
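The multi-sensor logic described here is, at its simplest, median voting with a disagreement check. A sketch with invented airspeed readings and tolerance:

```python
# Sketch of the redundancy logic described above: read several sensors,
# take the median as the trusted value, and warn about any sensor that
# strays too far from it. Readings and tolerance are invented.

from statistics import median

def vote(readings, tolerance):
    """Return (agreed value, list of suspect sensor indexes)."""
    agreed = median(readings)
    suspects = [i for i, r in enumerate(readings)
                if abs(r - agreed) > tolerance]
    return agreed, suspects

# Three pitot tubes; one is frozen and stuck at a low value.
airspeed = [252.0, 251.0, 80.0]  # knots
value, suspects = vote(airspeed, tolerance=10.0)
print(value, suspects)  # 251.0 [2]
```

With three or more sensors, the median ignores a single faulty reading entirely, which is why redundancy in odd numbers is so common in safety-critical systems.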
Sometimes the situation is more demanding when it is difficult, even impossible, to know if data sources and sensors give accurate data and how large the error margin is. Examples of this are wearable devices. They can measure your exercise patterns, sleep, and body functions like heart rate, temperature or blood pressure. These devices are calibrated using higher accuracy devices during development. But it is still hard to say how accurate they are for different people in different situations. For example, even with top-level research instruments, it is not easy to measure how much light sleep, REM, and deep sleep a person has at night.
We might also have a situation where we have many sensors, but some data is missing. It is a complex task to combine data from different sources, and it is also tricky to know whether the available data makes any sense when combined. This can occur with many IoT sensors, or with an organization’s internal data from multiple sources used to measure processes or even financials.
It is often said that the intelligence makes up only 20% of an AI implementation, and the rest is getting data, combining it and correcting errors. This layer is often underestimated. I have seen projects where 95% of the data was inaccurate, incorrect or missing.
There are several ways to increase the accuracy of data: for example, adding redundant sensors, calibrating devices against more accurate references, cross-checking several independent sources, and building software layers that detect and correct suspect values.
These layers that combine, correct and smartly use data become more important as we get more data sources. One could even say it is pretty simple to create AI models once someone has developed such a layer to make reliable data available. It is often said that the IoT business is not really about selling sensor hardware but about managing data, yet the critical question of getting reliable data is frequently ignored.
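Such a correction layer can start very simply: reject impossible values and fill small gaps before any model sees the data. A sketch with an assumed validity range and a last-good-value fill strategy:

```python
# Minimal data-correction layer: drop clearly impossible readings and
# fill gaps with the previous good value before the data reaches any
# model. The validity range and fill strategy are assumptions.

def clean(series, low, high):
    """Replace out-of-range or missing points with the previous good value."""
    cleaned, last_good = [], None
    for x in series:
        if x is not None and low <= x <= high:
            last_good = x
        cleaned.append(last_good)
    return cleaned

heart_rate = [62, None, 250, 64, 63]   # bpm; None = missing, 250 = glitch
print(clean(heart_rate, low=30, high=220))  # [62, 62, 62, 64, 63]
```

Real layers go much further (interpolation, cross-source checks, confidence scores), but even this simple filter keeps a single glitchy reading from distorting everything downstream.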
It is not easy to build these layers that combine data, because each source is different, and it can also require an understanding of the data itself to analyze and integrate the sources. It is possible to make general models and tools for this, but they often need tailoring for the different data sources and combinations of sources.
Hand in hand with AI, these smart data-combining models and layers become a vital part of the data and AI business. Data is valuable only if it is reliable, and we can trust AI only if it uses correct data. The reality is that no data source is 100% reliable, so we need intelligence about how to use data sources correctly and optimally.
The article was originally published on Disruptive.Asia.
It has become popular to manage operations with data, and new tools to collect and analyze data are continually appearing. But things can still go badly wrong with data. Dashboards and analytics apps rely on multiple assumptions, and if external factors change, the models no longer work. COVID-19 responses in many countries have been good examples of this. When you don’t know all the elements, individual numbers can be misleading. You can never manage only by the numbers you have; you must have insight, understand the environment, and be ready to look for changes outside the numbers.
Some years back, US Defense Secretary Donald Rumsfeld made his famous statement about known unknowns and unknown unknowns. Many people laughed at it, but it is quite a good way to describe reality, not only in war and foreign policy but also in business. Many companies focus on the things they know but are not prepared to handle external factors they are unaware of.
Many countries issued data-based recommendations and rules during the pandemic, e.g. tier systems for closing shops, services and restaurants, or varied travel restrictions. These are based on the number of cases per 100,000 inhabitants, the number of people in hospital, or the virus's R-value (reproduction rate). But most governments have been forced to change those rules and thresholds many times, which has given citizens and opposition parties reason to criticize their actions. The reality is that it is hard, and not very intelligent, to manage only by a set of numbers while many factors remain unknown. In that situation it makes sense to change rules and metrics as we learn more.
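To see why such rules are fragile, it helps to write one down. The sketch below is purely illustrative; the tier names and thresholds are invented, not any government's actual values. The logic is trivially simple, which is exactly the problem: anything the thresholds don't capture is invisible to it.

```python
# Hypothetical tier rule: map a 14-day incidence figure to restrictions.
# Thresholds (25, 100) are made-up illustration values.
def incidence_per_100k(new_cases: int, population: int) -> float:
    """Case count normalised per 100,000 inhabitants."""
    return new_cases / population * 100_000

def tier(incidence: float) -> str:
    """Fixed thresholds decide the restriction tier, nothing else does."""
    if incidence < 25:
        return "tier 1: minimal restrictions"
    if incidence < 100:
        return "tier 2: limited opening hours"
    return "tier 3: shops and restaurants closed"

# 1,200 new cases in a population of 5.5 million
print(tier(incidence_per_100k(1_200, 5_500_000)))
```

Every unknown, from new variants to testing capacity, forces the thresholds themselves to be revised, which is why governments kept changing them.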
Many companies focus on optimizing their operations based on the numbers they follow and measure. This is precisely why disruption lands an incumbent company in trouble: it focuses on numbers in the domain it knows and recognizes, while disruption often changes factors it doesn't monitor or doesn't even know exist.
The most famous examples are the mainframe computer companies when personal computers arrived, Nokia when Apple introduced the iPhone, and print publishers when content moved online. Those companies focused on optimizing their operations, products and metrics in the existing business and environment. Nokia, for example, was optimizing the production costs of phones, model ranges for different customer segments, and software features. By its metrics, the iPhone looked too expensive, unsuitable for many customer segments, and too limited in features.
Known unknowns and unknown unknowns are relevant categories for businesses to analyze in more detail. Known unknowns are factors they know about but cannot get details or data on: for example, competitors' future products, economic growth, and the future availability of components.
Unknown unknowns are factors we don't know about at all, yet they can still impact us. The pandemic, for example, came as a surprise, and we had no idea what effect it would have on our lives and businesses. There are many factors we don't know about or cannot even imagine, but they might have a large impact on us.
We can also think of one more category: unknown knowns. These are things we know but whose impact on us we don't recognize, such as a company having climate change data available but never considering it a factor in its business.
However, if we focus only on the 'known knowns', we can still be surprised when something changes, and still not understand the real reasons for it. Many businesses and people concentrate only on the 'known knowns' and try to understand and explain everything on that basis. They then miss the three other areas discussed above: external factors surprise them, or they reach wrong conclusions because they don't realize that things outside their focus have an impact.
Can we do something to handle the unknown areas better? Maybe the most important thing is not to assume you know and understand everything, and to keep your eyes open for other things too. There are at least a few categories of action that help you use data and metrics better: collecting signals from outside your own domain, regularly stress-testing your models and their assumptions, and preparing for scenarios your current numbers cannot predict.
The COVID-19 pandemic and its impact on governments, businesses and individuals has taught us that companies can, and need to, be better prepared for unexpected events. Some are big global events; others are smaller, such as why sales are falling or why people are no longer interested in a particular product.
Maybe the most important things to remember are: 1) you cannot know it all; 2) your models are not perfect; and 3) when something unexpected happens, don't assume you can explain or handle it with the old models alone. As the saying often attributed to Darwin goes, "the species that survives is the one that is best able to adapt and adjust to the changing environment in which it finds itself."
The UK's Supreme Court recently ruled that Uber's drivers are workers entitled to rights such as the minimum wage and holiday pay. Uber has a long history of legal battles; it has fought many times over taxi regulations and over who can offer rides and how. But the struggle over the rights of its drivers is even more fundamental: it hits at the heart of its operating model and cost structure, and it is a good example of disruption versus regulation.
The ruling says that drivers are working not only when they are on a trip but also whenever they are logged into Uber's app. Among other things, Uber now needs to start a pension scheme for these drivers. This significantly increases Uber's costs, and it fundamentally changes the idea that drivers are 'entrepreneurs' who get customers through Uber's app. Californians voted on a similar question about their workers' law in November; Uber spent a lot of money to win support for its model, and the result was that drivers could continue as independent contractors in that state.
Employee rights are only one area where new business models and disruptive startups run into old regulations. Digital services could and should be global, but financial services and fintech are examples where regulation significantly restricts how, and to whom, services can be offered. In fintech we see many restrictions, including which services users are not allowed to access. Regulations are supposed to protect citizens, but they also shelter companies that use old models and want to continue business as usual. Is that fair to people who ask for freedom, not protection?
Regulation is one of the main reasons why fintech services are slow to win market share. It doesn't just limit how services are offered; it also makes them more expensive to provide. In a global sense, it is hard to understand why you can use services only in your own country. Why can you travel to another country to get financial advice and make investments, but in many cases cannot use those services online or talk on Zoom with your advisor in another country?
There are many other examples of startups and new business models colliding with old regulations. And it is not only existing regulations: labor unions and incumbent companies, for example, push for new laws and regulations to protect their positions. Taxi companies, taxi driver unions and banks are famous for using laws and their lobbying power against newcomers. None of them has a reputation as a model citizen or for offering customers the best service.
In most cases, the arguments against newcomers are justified with good intentions, such as protecting customers and employees and ensuring fair competition. It is never easy to say what is right and wrong, or what the best way to protect someone is. Still, it would be more honest to admit that in most of these cases the question is not really about protecting customers, employees or competition, but about the fight between old and new models.
Many people want to drive for Uber and similar services as independent contractors and keep the freedom to do other things too, while others prefer more permanent employment and paid holidays. Many people would like to use new fintech services and global financial services, while others just want to walk to their local bank branch and send checks by mail.
As a result, societies become fragmented. It is tough to have one model fit all, but regulation forces one model that everyone must follow. That is OK and easy to understand when the objective is to protect all people. But when it concerns people who do not want that protection, and causes no harm to others, it is harder to justify. Of course, there are always arguments about indirect impact, e.g. how the competitive environment is shaped.
Let's be honest: many of these questions are political. They are about conservatism versus the freedom of individuals and businesses. Some are also about negative versus positive models of freedom, i.e. whether a system merely allows something or actively offers equal opportunities to different parties. In any case, the reality of business is that the most efficient model eventually wins, assuming lawmakers don't restrict people's freedom by limiting the services they are allowed to use.
The article first appeared on Disruptive.Asia.
People are living and working more and more in digital environments, and COVID-19 has accelerated the transition to virtual and digital interactions. Security is a concern in many services. But part of the problem is that security experts, companies addressing customer concerns and even governments focus on negative messages and offer restrictions and hard-to-use tools, instead of focusing on opportunities and making the internet a more trusted environment. The thinking is often too technical and theoretical, not based on human behavior or user experience.
Trust is a fundamental basis for societies and businesses. Countries where people trust each other typically work better than countries with low trust, and it is hard to make a country or city safer just by adding more police officers or restrictions. If business parties cannot trust each other, they focus on short-term quick wins and avoid long-term commitments and investments.
We have the same situation in the digital environment, but many parties still believe that more restrictions, more policing tools and trendy 'trustless' transaction solutions will make it better. We can see this on many levels. In many companies, security officers and experts tell people what must not be done, emphasize how risky everything is, and create all kinds of rules for the organization. Governments also sometimes adopt very simplistic models; some countries even restrict what people can see and do on the internet. Even the US and UK want to move toward populist measures such as forbidding end-to-end encryption in the name of fighting terrorism or protecting children. Of course, that is a totally unrealistic demand and does little to make the internet a safer or better place.
We all know how complex digital banking, identification and signing services can be to use. They are usually built from a very technical perspective, to be technically bullet-proof. Still, they are not lazy-user-proof: users stop using the service, or forget the security recommendations while using it.
The Financial Times organized its annual European Financial Forum in early February, and one crucial topic was digital finance services. Several speakers emphasized digital trust as a critical component for developing digital services. Nowadays, many things are done online, with email and messaging services, video calls and digital signatures. If parties cannot trust each other, it is quite impossible to conduct digital business.
Facebook deletes billions of fake profiles annually, we all get loads of suspicious emails daily, and companies create bots and fake profiles on LinkedIn just to generate contacts and sell more. Companies use solutions to secure communications and information sharing internally. Still, more and more business is done across organizations, and most often email, Zoom and WhatsApp are the typical tools, simply because they are the easiest to use.
It is quite evident that better trust solutions are needed. But they should build on natural human behavior and somehow reproduce, digitally, the trust that societies and communities have built up over generations. Cryptography experts alone cannot create digital trust.
Typically, trust is built step by step through human interaction. You may be in the same class at school, study together at university, work together, live in the same neighbourhood or share a hobby. Or you know someone you trust, they introduce you to someone else, and you extend some trust to the new person by inference. Trust is not black and white: you build it over time, it depends on the context, and you can lose it quickly. And trust is not based on a set of rules and restrictions; it is based primarily on positive experiences with someone.
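These dynamics can be sketched in code. The toy model below is entirely my own illustration, not an existing protocol: trust grows slowly with positive experiences, an introduction passes on only a discounted share of the introducer's trust, and a single breach resets it.

```python
# Toy trust model (hypothetical illustration): scores live in [0, 1].
class TrustLedger:
    def __init__(self, introduction_discount=0.5):
        self.scores = {}                      # person -> trust score
        self.discount = introduction_discount

    def positive_experience(self, person, weight=0.1):
        """Each good interaction nudges trust upward, saturating at 1."""
        current = self.scores.get(person, 0.0)
        self.scores[person] = min(1.0, current + weight)

    def introduce(self, introducer, newcomer):
        """Trust by inference: the newcomer inherits a discounted share
        of the introducer's trust, never more than they already have."""
        inherited = self.scores.get(introducer, 0.0) * self.discount
        self.scores[newcomer] = max(self.scores.get(newcomer, 0.0), inherited)

    def breach(self, person):
        """Trust is lost quickly: one breach resets the score."""
        self.scores[person] = 0.0

ledger = TrustLedger()
for _ in range(8):                   # years of good experiences with Alice
    ledger.positive_experience("alice")
ledger.introduce("alice", "bob")     # Bob inherits some of Alice's trust
print(ledger.scores)                 # Alice near 0.8, Bob near 0.4
```

Even this crude model captures the asymmetry the text describes: trust takes many interactions to build, transfers only partially through introductions, and disappears in one step.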
We are stepping into a new era of digital trust, and the pandemic has accelerated the need for it. We need new solutions to build and manage digital trust, and they will have to include both social and technical innovations. They will also need to work with our daily digital tools, such as email, chat, video calls and data sharing. As trust in society is based on positive experiences and opportunities, we need digital trust tools that are likewise based on positive experiences, mutual learning and finding new opportunities.
The article first appeared on Disruptive.Asia.