Alibaba Cloud vs. Amazon Web Services: which is cheaper?

Recently established in Europe, the cloud subsidiary of China’s e-commerce giant intends to compete with AWS. But is it competitive?

When it comes to pricing and the public cloud, it is common to compare the three cloud giants: Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform. A fourth player intends to disturb this leading trio. As in other fields, telecoms especially, it is a Chinese one: Alibaba Cloud. With its cloud division created in 2009, the e-commerce giant is trying to replicate the success of its US competitors, with some success. According to IDC, Alibaba Cloud ranked third in the public IaaS market in 2017, behind Amazon and Microsoft and ahead of Google and IBM, with more than $1 billion in revenue and a 4.6% market share. Alibaba Cloud still carries out most of its activity in its home market, but it began an international expansion in 2015. The provider is present in the United States, Australia, Japan, India, South East Asia and the United Arab Emirates. The European conquest is more recent, with the opening of a data center in Frankfurt, Germany, in November 2016. Last October, Alibaba Cloud opened two new Availability Zones in the UK.

In terms of costs, is Alibaba Cloud more competitive than AWS? Price benchmarking between cloud offers remains a puzzle, as providers mix different services and pricing models, seemingly with the aim of making any comparison difficult or impossible. The best yardstick for the exercise is the computing power provided by virtual machines: Alibaba's Elastic Compute Service (ECS) is the counterpart of AWS's well-known Elastic Compute Cloud (EC2).

While AWS offers some 80 instance types divided into five categories (general purpose, compute optimized, memory optimized, accelerated computing, storage optimized), Alibaba Cloud has about thirty, which makes the comparison tricky. For this comparison, we opted for the Linux instances of both providers that address basic, everyday usage.

Amazon more affordable over time

The rates compared here correspond to the Frankfurt region for Alibaba Cloud and the Paris region for AWS. Both providers offer a monthly rate or on-demand, pay-as-you-go pricing. For its reserved instances, AWS also offers a decreasing rate depending on the length of the commitment, with discounts ranging from 29% to more than 60%. For its part, Alibaba Cloud markets prepaid monthly subscriptions.
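To make the arithmetic concrete, here is a minimal Python sketch of how such rates compound over a month. The hourly price and the discount bounds are placeholders consistent with the ranges cited above, not actual list prices from either provider.

```python
# Rough sketch comparing an on-demand rate with reserved/prepaid rates.
# The hourly price below is a hypothetical placeholder, not an actual
# AWS or Alibaba Cloud list price.

HOURS_PER_MONTH = 730  # average number of hours in a month

def monthly_cost(hourly_rate: float, reserved_discount: float = 0.0) -> float:
    """Monthly cost for one instance, optionally with a commitment discount."""
    return HOURS_PER_MONTH * hourly_rate * (1.0 - reserved_discount)

on_demand = monthly_cost(0.10)                 # pay-as-you-go
one_year_reserved = monthly_cost(0.10, 0.29)   # ~29% discount (low end cited)
three_year_reserved = monthly_cost(0.10, 0.60) # ~60% discount (high end cited)

print(f"On demand:       ${on_demand:.2f}/month")
print(f"1-year reserved: ${one_year_reserved:.2f}/month")
print(f"3-year reserved: ${three_year_reserved:.2f}/month")
```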

Since these offers are difficult to compare, it is more sensible to stick to the on-demand price. On that basis, the posted costs are slightly lower for AWS. In the comparison made last June by ParkMyCloud, however, Alibaba is more competitive in other regions, especially in Asia. It also benefits from the difficulty Western clouds have entering the Chinese market: in China, AWS had to divest infrastructure assets to its local partners, Sinnet and NWCD in Ningxia, in order to comply with regulations.

ParkMyCloud notes that, in general, Alibaba is cheaper than AWS for a one-month subscription. In contrast, AWS has more affordable rates for longer commitments and reserved instances. Alibaba can also, according to the cloud comparison service, hold some nasty surprises. In particular, it advises activating the "No charge for stopped instances" option to avoid being billed for idle instances.

Alibaba Cloud: a credible alternative for the future?

In its latest study on IaaS, published last May, Gartner is optimistic about the international reach of Alibaba Cloud. "Alibaba has the financial means to continue its global expansion [...] and continues to invest heavily in R&D. It has the potential to become an alternative to the global clouds in some regions." Its main lever: "transferring its success in China to foreign markets."

For the consulting firm, Alibaba Cloud has a wide range of IaaS and PaaS offerings, comparable to the service portfolios of other providers. Among the hurdles, Gartner questions Alibaba Cloud's ability to target traditional businesses that have not yet moved into the public cloud, as its hybrid cloud solution, Apsara Stack, is not yet available internationally. Its portal can also seem confusing: the user is not always clear about what capabilities are available in each region.

Moreover, by replicating the strategy and offering of its Western competitors, Alibaba Cloud brings few differentiating elements. In their wake, it has expanded its IaaS in recent months by launching big data and artificial intelligence services. Finally, international customers may perceive security and regulatory compliance risks in using a Chinese provider, even though Alibaba Cloud has been subject to audits by third-party firms.

What are the most popular Amazon cloud services?

The most used offerings, the ones with the highest spending growth... Here are the services that have the wind in their sails on AWS.

Athena is in pole position among Amazon's fastest-growing cloud bricks in 2018, according to the latest 2nd Watch barometer of the most popular AWS services. To compile its ranking, the US firm analyzed some 400 managed enterprise workloads and more than 200,000 instances running in a managed public cloud environment. "It's no wonder: Athena gives the Amazon S3 storage service a SQL layer that makes it easy to query and visualize data," says Jérémie Rodon, cloud architect at D2SI, an AWS-focused Devoteam affiliate. From 2017 to 2018, Athena spending increased by 68% among 2nd Watch customers.
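For readers unfamiliar with the service, here is a hedged sketch of what that SQL layer looks like in practice, using the boto3 SDK. The region, database, table and bucket names are invented for illustration.

```python
# Minimal sketch: launch an Athena query over data already sitting in S3.
# Database, table and output bucket are hypothetical.
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query started:", response["QueryExecutionId"])
```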

Not far behind, Amazon EKS (Elastic Container Service for Kubernetes) is making a breakthrough: spending on it jumped 53% year over year. "For years, engineering teams have been implementing Docker containers in-house, which largely explains the enthusiasm for Amazon's container orchestrator, not to mention its advantages for applications decoupled on a micro-services basis," analyzes Jérémie Rodon.

In the 2nd Watch top ranking, the D2SI consultant also notes the presence of SageMaker, the AWS solution designed for machine learning. Ranking sixth in the charts, "SageMaker records a 21% increase over one year, again in terms of spending," estimates 2nd Watch. Jérémie Rodon confirms: "Our customers are increasingly adopting this service, and it would not be surprising to see it top the list next year." It must be said that SageMaker casts a wide net: it covers both the preparation of training datasets and the deployment of learning models, by way of the delicate training phase itself. "Its hyperparameter tuning features let you fine-tune the configuration of the algorithms automatically."
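As an illustration of that hyperparameter tuning capability, here is a minimal sketch using the sagemaker Python SDK. The container image, IAM role, metric name and parameter ranges are all assumptions for the example, not a production configuration.

```python
# Hedged sketch of a SageMaker hyperparameter tuning job.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

session = sagemaker.Session()

estimator = Estimator(
    image_uri="<training-image-uri>",   # placeholder training container
    role="<execution-role-arn>",        # placeholder IAM role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(1e-4, 1e-1),
        "num_layers": IntegerParameter(2, 8),
    },
    metric_definitions=[{"Name": "validation:accuracy",
                         "Regex": "val_acc=([0-9\\.]+)"}],
    max_jobs=20,          # total training jobs to try
    max_parallel_jobs=4,  # tried four at a time
)

# tuner.fit({"train": "s3://example-bucket/train/"})  # launches the tuning job
```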

On the side of the most used AWS bricks, EC2 instances and S3 storage unsurprisingly lead, alongside AWS Data Transfer, which covers the billing of data leaving Amazon's cloud for the Internet. "These are key elements for managing the deployment of applications on AWS and making the most of it in terms of horizontal sizing of resources based on traffic," says Jérémie Rodon at D2SI. Paving the way for a fully isolated network infrastructure, Amazon Virtual Private Cloud (VPC) is also one of Amazon's most consumed cloud products, according to 2nd Watch. Bringing the security layer, it lets users define their own IP address ranges.

“Amazon SNS, SQS and SES are typical cloud components for building micro-services-based applications”

Just behind this leading pack, CloudWatch stands out. Amazon's monitoring console is used by 98% of 2nd Watch customers. Again, this is no surprise: CloudWatch has become the de facto central solution for managing systems on AWS. This level of adoption (98%) is also reached by AWS KMS (Key Management Service), which is designed to secure data.

Other offerings in the race include Amazon SNS (96%), AWS SQS (84%) and Amazon SES (80%), designed respectively to push notifications, orchestrate the exchange of messages between applications, and send emails. "These are typical cloud components for building applications based on micro-services; it's not surprising to see them in this ranking," acknowledges Jérémie Rodon. Used by the vast majority of D2SI customers to power software in serverless mode, AWS Lambda is not far behind (72%). "Lambda is nestled in architectures to manage [...]," notes Jérémie Rodon.
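To give a concrete feel for these three messaging bricks, here is a minimal boto3 sketch; the topic ARN, queue URL and email addresses are placeholders.

```python
# Minimal sketch of SNS (notifications), SQS (queues) and SES (email) via boto3.
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")
ses = boto3.client("ses")

# SNS: push a notification to every subscriber of a topic
sns.publish(TopicArn="arn:aws:sns:eu-west-1:123456789012:orders",
            Message="Order 42 shipped")

# SQS: queue a message for asynchronous processing by another service
sqs.send_message(QueueUrl="https://sqs.eu-west-1.amazonaws.com/123456789012/orders",
                 MessageBody='{"order_id": 42}')

# SES: send a transactional email
ses.send_email(
    Source="noreply@example.com",
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={"Subject": {"Data": "Your order has shipped"},
             "Body": {"Text": {"Data": "Order 42 is on its way."}}},
)
```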

The top 10 most used AWS bricks also include two database services. In the lead: DynamoDB. Amazon's flagship managed NoSQL database is implemented by all the customers analyzed by 2nd Watch. The second is none other than Amazon RDS (Relational Database Service).

How to succeed in a cloud-driven digital transformation?

When it comes to migration to the cloud, it is reductive to consider it as merely a cost-effective solution for storing and hosting applications. Only companies that are ready to adopt agile and flexible approaches will reap its benefits.

Clearly, companies still show a certain reserve toward the cloud, being reluctant to hand control of their data to a cloud provider and expressing concerns about the security of that data. These fears are allayed, however, by the fact that the benefits far outweigh the risks, and that those risks can be controlled and reduced.

Therefore, it is no longer a question of whether companies should migrate to the cloud, but rather how to optimize this migration.

The cloud, a financially relevant solution?

In some cases, using the cloud can be more cost-effective than owning data centers; in others, the cost can be significantly higher. The cost of the migration process itself is also often forgotten: the first few months can be particularly expensive, depending on the method used to carry out the migration.

If the project is just moving existing applications from a data center to the cloud without modifying them, using the cloud is likely to be more expensive than the original system. Conversely, if applications leverage specific cloud features that improve their capabilities and efficiency, such as dynamic resource allocation, then the cloud is the more cost-effective option.
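As one concrete instance of such a cloud-specific feature, here is a hedged sketch of a target-tracking auto-scaling policy on AWS, set up via boto3; the Auto Scaling group name is hypothetical.

```python
# Sketch of "dynamic resource allocation": a target-tracking scaling policy
# on an EC2 Auto Scaling group, configured via boto3.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-frontend-asg",  # hypothetical group name
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # add/remove instances to hover around 50% CPU
    },
)
```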

It is also important to weigh the services offered by the cloud provider against the needs of the business. While high value-added services can facilitate the migration process, these capabilities carry an additional cost on top of the standard costs of the cloud and data centers, and should be taken into account.

Data, still data, always data

While managing security issues is easier in the cloud, it is important to give them serious consideration, especially in the context of the General Data Protection Regulation (GDPR) that came into effect this year.

Businesses can rely on a number of cloud-based cybersecurity solutions. Cloud providers invest heavily in security because it is an essential part of their business: they have state-of-the-art security teams and a wide range of tools, offered directly or through other vendors, to help their customers integrate security into their own systems.

However, whether an application is hosted on-premises or in the cloud, it is up to the team responsible for its development to deploy it properly using these security systems.

All of this is at the very heart of the GDPR and the restrictions that this regulation imposes on how businesses manage their data, whether they use cloud services or do everything internally. The cloud gives businesses means to enforce these restrictions and increase their security. But the cloud in itself is neither a solution to security needs nor an aggravating factor. Rather, it provides a framework for delivering secure applications and infrastructure, for which full visibility into vulnerabilities, risks and application performance is imperative.

What does the ideal migration look like?

Every migration presents many challenges. While most companies have high expectations once the decision is made to migrate to the cloud, the reality is not always so simple. Typically, departments blame one another as soon as a problem occurs during the migration. Added to this is pressure to claim victory before a team is ready.

In most companies moving to the cloud, software teams struggle to keep their promises to stakeholders. A successful migration to the cloud involves taking action during the planning and migration phases, as well as monitoring after the migration. To do this, it is essential to instrument applications early in the process, to be able to quickly identify problems and gain a better understanding of what happens to the application during the different phases of the migration.

During the planning phase, it is important to identify the type of indicators that will be measured and to have a clear vision of the reasons for these measurements. These Key Performance Indicators (KPIs) must then be monitored during the migration to ensure success. You also need to create an inventory of resources and map their dependencies, in order to understand how the components of the application work together and to set performance expectations for each one.
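As a sketch of what such instrumentation can look like, the snippet below pushes a custom migration KPI to Amazon CloudWatch via boto3, one possible backend among others; the namespace, metric name and values are illustrative.

```python
# Hedged sketch: publish a custom migration KPI (request latency here)
# to CloudWatch so it can be tracked before, during and after migration.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="Migration/KPIs",  # hypothetical namespace
    MetricData=[{
        "MetricName": "CheckoutLatency",
        "Value": 184.0,          # measured latency for this request
        "Unit": "Milliseconds",
        "Dimensions": [{"Name": "Phase", "Value": "pre-migration"}],
    }],
)
```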

This level of visibility, along with application performance data, helps determine which resources fall within the scope of the migration, as well as the order in which to migrate them. It also makes it possible, during the migration, to assess whether cloud resources are actually improving the application's operational capabilities. Through rigorous acceptance testing, it is possible to verify that the objectives have been achieved, or to iterate on the process until the desired results are reached.

Accelerate change

The ability to gain confidence and demonstrate results is essential, because cloud-based infrastructure is now beyond the direct control of the enterprise. It is also imperative to understand the impact of cloud technologies on applications and the customer experience before seeking to leverage the more advanced capabilities and services that the cloud offers.

The migration of production applications to the cloud raises many questions, including whether the performance impact is in line with expectations. Dynamic serverless technologies and distributed architectures all offer innovation and scalability capabilities. But it is important to be able to assess whether their impacts on performance, whether positive or negative, are appropriate to the application and will meet the needs of the business.

As new technologies emerge, it is also important to determine what stage of the hype cycle they are at, as these technologies are often overused. Indeed, fully embracing a new service without instrumentation runs the risk of ending up with an application running on inferior infrastructure. In addition, using a technology that is not yet mature is risky unless you understand its value, and you cannot see that value without a clear view of the technology's impact on the application.

The cloud, a source of opportunities

Overall, migrating to the cloud provides many opportunities for businesses to seize, and new ideas take shape through cloud technologies. While industry trends and market-specific challenges differ from one company to another, the constant is to keep the customer experience at the center of decisions and to stay flexible and agile in the face of evolving customer expectations. The cloud is opening this path for established institutions and new start-ups alike.

Cycloid raises 3 million euros to internationalize its DevOps platform

The Parisian start-up is taking the opportunity to equip its platform with a graphical modeling console for cloud architectures, oriented toward infrastructure as code.

The French publisher of a DevOps platform, Cycloid is closing its first funding round. The €3 million was raised from Orange Digital Ventures (ODV), Orange's investment fund focused on seed-stage deals. With this contribution, the start-up intends to consolidate its presence on the French market while launching its commercial deployment internationally, targeting Germany, Spain, Great Britain, Israel and Sweden. To give itself the means to match its ambitions, Cycloid intends to grow from 25 employees to 50 within a year.

From code repository to production

Founded in 2015 by Benjamin Brial, former sales manager of eNovance (since acquired by Red Hat), Cycloid has already made its mark on the French cloud ecosystem. One of the few French players certified as an Advanced Technology and Consulting partner by AWS, the company targets both large accounts wishing to industrialize their DevOps approach and small teams seeking to equip theirs. Counting France Télévisions, Frizbiz, Millésima and Warner Bros among its references, it has distinguished itself since 2017 by winning the EuroCloud France trophy in the start-up category.

Cycloid covers the whole DevOps cycle, from code repository to production, implementing hand-picked open source tools for each step of the workflow: Ansible (for configuration management), Docker and Kubernetes (for container architectures), Terraform (for infrastructure as code), Prometheus and Grafana (for monitoring), Vault (for access management)... With gateways to Slack, GitHub and Bitbucket, the product is built around an integrated console combining everything. It orchestrates continuous integration and deployment (CI/CD) processes, the description and documentation of architectures, production routines and logs, not to mention event monitoring and the supervision of cloud service consumption and associated expenses.

"Our continuous integration engine is natively based on the open source tool Concourse, though it is always possible to connect a third-party CI application if needed," comments Benjamin Brial. The same goes for the other components of the platform. "The solution is completely modular: for access management, for example, we can connect a tool other than Vault, or use the Elastic Stack to centralize logs, etc.," adds Benjamin Brial. Along with AWS, Cycloid claims the ability to deploy its product on any cloud.

"In the end, our offer combines the advantages of a CMP (cloud management platform) with those of a PaaS, but without being as restrictive," argues Benjamin Brial. A catalog of services gives access to a series of pre-configured projects for deploying Drupal, Elasticsearch, Kibana or Magento infrastructures, each with the full supporting stack (CI/CD pipeline, configuration management, infrastructure as code...). With a private mode, the catalog can also host services specific to each client, distributed internally by the IT department.

Until now accessible on-premise, or in cloud mode only on request, Cycloid is using its funding to roll out its solution as a public SaaS offering payable online by credit card, with an entry price of 45 euros per user per month. Given this hybrid approach, Cycloid is pursuing both a direct and an indirect go-to-market strategy. "For the indirect side, we are deploying a partnership program federating agencies, IT services companies and cloud providers," says Benjamin Brial.

Coming soon on Google Cloud and Microsoft Azure

More interestingly, the company also announces that it is enriching its product with a graphical modeling console for cloud architectures, designed for the visual development of infrastructure as code. It lets you drag and drop the various cloud services required for an application (load balancing manager, front-end instances, database...), then configure their links and set their variables. Ultimately, the corresponding Terraform code is automatically generated, following the logic of a WYSIWYG code editor. For the time being limited to AWS, Cycloid plans to extend the console to Google Cloud and Microsoft Azure by the end of the year.
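To illustrate the general principle of generating infrastructure as code from a model, here is a hedged Python sketch that emits Terraform's JSON syntax (*.tf.json) for a hypothetical front end. It is a toy illustration of the idea, not Cycloid's actual generator, and the resource arguments are placeholders.

```python
# Toy sketch: turn an in-memory model into Terraform JSON. Terraform accepts
# *.tf.json files, so a generator can simply serialize a dictionary.
import json

model = {
    "resource": {
        "aws_instance": {
            "frontend": {
                "ami": "ami-0123456789abcdef0",  # placeholder AMI
                "instance_type": "t3.micro",
                "count": 2,                      # two front-end instances
            }
        },
        "aws_lb": {
            "frontend_lb": {
                "name": "frontend-lb",
                "load_balancer_type": "application",
            }
        },
    }
}

with open("main.tf.json", "w") as f:
    json.dump(model, f, indent=2)  # ready for `terraform plan`
```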

The company is enriching its product with a graphical cloud architecture modeling console designed for the visual development of infrastructure as code. © Cycloid

Medical Data: 5 Steps to Limit Loss, System Downtime and Explosion of Management Costs

In the health sector, quick access to patients' sometimes vital data is essential. If doctors cannot obtain it, the quality of care can seriously suffer, with unpredictable consequences.

Storage and data protection are therefore fundamental in this sector. System downtime must be avoided at all costs, even in the event of a major crisis such as a natural disaster, a hardware failure or a cyberattack.

This is where the first challenges emerge: storage, protection, and intelligent data management are becoming increasingly complex, time-consuming, and costly for healthcare organizations.

Understanding the triple challenge (management, storage and data protection) in healthcare

There are several reasons for this situation. For starters, the volume of medical data is growing at an exponential rate, especially as a result of efforts to digitize all medical records (DMPs).

Diagnostic devices such as CT scanners, MRI machines and other X-ray equipment also contribute, because they produce massive amounts of imaging data. As these technologies improve, the generated image files gain in quality and resolution, and are therefore more numerous and larger. Since hospitals are generally required to keep these images for seven years, in accordance with disaster recovery requirements, their image archives can grow by 40% per year, according to AT&T's ForHealth service.

The Internet of Things is another challenge these organizations face in managing their data. Connected devices, such as fitness monitors and medical sensors, produce their own data streams, all of which must be stored, protected and managed.

With all of this taken into account, even small hospitals or medical practices can quickly generate data volumes on the order of petabytes. But the challenge does not stop there: as volume increases, so do the time, budget and resources needed to store, protect and manage vital patient data.

Healthcare companies need a data protection and management solution that provides consistent, uninterrupted access. It must be reliable, but also affordable, so as not to strain their budget. The task can seem immense.

The solution must also protect these organizations, which are subject to a growing number of attacks of various kinds, such as ransomware, which can infect information systems, seize patients' vital data and hold them hostage.

Hospitals are particularly susceptible to this kind of extortion because of their reliance on the information in their patients' records. Without rapid access to their systems, health professionals simply cannot do their jobs. For hackers, hospitals are the perfect target: they cannot risk their patients' lives and are therefore more likely to pay ransoms.

That is one of the reasons why the healthcare sector suffered more ransomware attacks than any other industry in 2017, according to a report by Beazley Group, a global cyber insurance company. The report concluded that 45% of all ransomware attacks in 2017 targeted this sector.

The good news is that the market offers solutions designed to manage growing amounts of medical data in a secure and cost-effective manner, ensuring that quality of care is never endangered by lack of access to vital information.

Here is a 5-step strategy for healthcare companies to protect themselves against the triple threat of exploding management costs, system downtime and loss of data integrity.

1: Converge primary and secondary data storage

To properly manage the data explosion, healthcare organizations must take an approach that provides comprehensive protection and storage services in a single, integrated and easy-to-use system. By integrating primary, secondary, and cloud-based data management capabilities, organizations can eliminate storage and data silos while reducing the risk of downtime.

2: Take advantage of cost-effective and scalable storage

Small and medium-sized hospitals and medical practices face many of the same challenges as larger health providers, despite smaller budgets. That is why they need scalable storage that adapts to their data needs. Health organizations should be able to start with a single node holding a few terabytes, then grow seamlessly to several petabytes without configuration or application changes.

3: Protect yourself against data degradation

Medical images in particular are highly vulnerable to data degradation. Silent corruption of imaging data is therefore a significant problem, aggravated when existing programs store files such as X-rays in a picture archiving and communication system without being able to detect that the data is compromised. As a result, information read from the existing storage system can be corrupted and unusable. Health organizations therefore need modern data management solutions that can protect against this type of degradation.

4: Prevent ransomware attacks

Data protection is one of the top priorities for healthcare companies, as they are under constant threat of cyberattack. These organizations need robust protection throughout the data lifecycle, while avoiding unnecessarily complex management. One answer to this challenge lies in immutable object storage. Modern healthcare companies address the problem by adopting a storage solution that continuously protects information and takes snapshots of data every 90 seconds. Since the object storage is immutable, these snapshots are not affected in the event of an attack. Health organizations can therefore retrieve the latest version of their data and thus thwart ransomware attacks.

5: Determine a method for calculating ROI

While it is not obvious how to assign a value to health data, it can be useful to quickly estimate an ROI for protecting that data. In the United States, for example, due to the rise in cyberattacks, hospitals are looking for insurance policies that cover data breaches or data loss. During risk assessment, insurance companies assign a dollar value to each medical record, which can quickly add up to tens of millions of dollars in premiums. These premiums can be reduced, however, when hospitals can demonstrate that they have effective data protection and management strategies. Recently, a health provider facing a premium of...
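As a back-of-the-envelope illustration of that ROI logic, here is a small Python sketch. Every figure in it is invented for the example: the per-record value, the premium rate, the discount and the cost of the data platform.

```python
# Toy ROI sketch: premium savings from demonstrated data protection,
# weighed against the cost of the protection platform. All figures invented.

records = 2_000_000
value_per_record = 500          # dollars assigned per medical record (hypothetical)
base_premium_rate = 0.02        # premium as a fraction of insured value (hypothetical)
discount_for_protection = 0.30  # premium reduction earned (hypothetical)
solution_cost = 2_000_000       # annual cost of the data platform (hypothetical)

insured_value = records * value_per_record
premium = insured_value * base_premium_rate
savings = premium * discount_for_protection

print(f"Insured value:         ${insured_value:,.0f}")
print(f"Annual premium:        ${premium:,.0f}")
print(f"Premium savings:       ${savings:,.0f}")
print(f"Net first-year effect: ${savings - solution_cost:,.0f}")
```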

With an appropriate data management solution, organizations in the healthcare industry can protect their data and reduce costs, while also more easily fulfilling their core mission: caring better for their patients and saving more lives.

Cloud computing engineer: salary, job description, training …

The cloud computing engineer is responsible for managing the storage of corporate data in the cloud. The challenge is to secure the transferred data and make it easy for employees to access it from their workstations. Discover the salary of a cloud computing engineer, the job description, and the training to follow...

The cloud computing engineer must identify the companies offering cloud data storage services (dematerialized servers) that match the needs of the company he works for. He studies the offers of the various service providers and manages calls for tenders in order to find the ideal partner. He is then in charge of the actual implementation of the new cloud computing system and of issues related to data accessibility. The cloud computing engineer may also be responsible for training employees on data storage. The job has two components: a technical side, for the implementation of storage solutions, and a side focused on communication with the people affected by the rollout of these new solutions.

Cloud Computing Engineer: Skills

The cloud computing engineer must have solid computer skills, particularly in everything related to information security and data storage. Managerial skills are also sought after, because he has to communicate effectively.

Salary: €3,300 (junior salary)
Education: Bac+5 (five years of higher education)
Recommended high school diploma: all tracks

Cloud Computing Engineer: Training

Engineer's degree in Computer Science, or in Computer Science and Networks
Master's degree in Computer Science, Multimedia and Networks
Master's degree in Computer Science, networks specialty
Master's degree in Industrial Systems Engineering
Cloud Computing and Mobility (Agile and Mobile Information Systems)

Cloud computing engineer: the curriculum

INSA Toulouse 
ENSSAT 
The Sorbonne
University of Picardy

Presence AI, a French AI that takes the opposite tack to Google Duplex

Headquartered in San Francisco, the start-up co-founded by two French entrepreneurs proposes to handle the booking of commercial appointments by SMS or via Alexa. It is winning over the beauty sector.

Could a start-up built in California by French founders counter the plans of Google Duplex? For the record, Google's new conversational AI service offers to phone on the user's behalf to make appointments. "Google Duplex proposes to generate more commercial calls at a time when companies want to reduce them," says Michel Meyer, one of its three co-founders and a well-known figure of the French internet, famous for having founded Multimania (a French community site sold to Lycos in 2000). Created in 2016, his start-up tackled the problem differently: it handles appointment booking by text message or via Alexa, Amazon's voice assistant, while waiting for Google's. The application also distinguishes itself from intelligent assistants like Julie Desk or Clara, which automate appointment scheduling by plugging into email and calendars.

Michel Meyer got the idea for Presence AI while looking at his eldest daughter's telephone bills. He realized that she no longer made phone calls, only exchanged texts with her contacts. And she is not the only one: according to him, fewer and fewer people pick up the phone, a trend that can only grow with the arrival in force of Generations Y and Z, addicted to their smartphones.

"We are entering the age of the conversational internet," says Michel Meyer. "Businesses must therefore optimize messaging and intelligent assistant interactions, especially since scheduling is a time-consuming activity, and more than 20% of appointments are moved or canceled." By text or voice, Presence AI intends to allow businesses to be available immediately, 24/7, to book, confirm or reschedule an appointment.

Incubated by the Alexa Accelerator by Techstars

To manage SMS, Presence AI relies on the company's landline or creates a dedicated number. On the voice side, the platform works closely with Alexa's teams to optimize responses. From July to October 2018, the company took part in the Amazon Alexa Accelerator incubation program powered by Techstars in Seattle. "With Alexa, commands are often simple," notes Michel Meyer. "Booking appointments hides a certain level of complexity, with date constraints and the identification of the services requested."

While a large number of appointments are handled automatically, Presence AI can also orchestrate multichannel exchanges: the customer initiates the conversation on Alexa, for example, and the confirmation of the time slot is sent by SMS. Until now, booking an appointment required a professional to consult their calendar. A customer of a hair salon might, for instance, ask for both a cut and a color with special requirements; the AI will then hand the request over to her usual hairdresser.

Presence AI aims not only to achieve the highest conversion rate, but also to improve the frequency of visits. The AI will notify clients that it is time to make an appointment, based on their history of past visits or a prolonged absence. Each company adopts its own strategy. "If a business goes from six visits a year to seven for a given customer, the impact on its revenue is immediate, for a virtually zero acquisition cost," says Michel Meyer.

Faced with the fear that AI will replace employees, Michel Meyer believes that, on the contrary, it removes the burden of answering the phone so staff can focus on higher value-added tasks. "At the end of the free trial period, it is also the staff who are our best advocates," he says. In terms of pricing, Presence AI offers an entry-level version at $49 per month for fewer than 200 customers, a business version at $129 per month with an unlimited number of customers, and an enterprise edition priced on quote for an unlimited number of sites and custom integrations.

Suggest additional sales

Presence AI raised funds in 2016 from business angels and two investment funds, France's Newfund and America's Blue Capital. More recently, Amazon.com and Techstars invested $120,000 in the company. To grow its installed base, Presence AI plans to support other messaging channels such as WhatsApp, but also to widen its prospecting field. So far, the start-up has focused on the health and beauty market (hairdressers, massage parlors, eyelash and nail care specialists) and related trades (sports classes, sports equipment rental). It integrates its application with management software focused on these segments, such as SalonBiz or Mindbody Booker. In the future, it plans to tackle other sectors, including automotive.

Presence AI also plans to support new scenarios. For cross-selling and up-selling, "the idea will be to offer additional services," says Michel Meyer. "You're coming to the salon tomorrow: we can offer a head massage. You chose a particular oil during a massage session: would you like to buy it?"

In terms of expansion, Presence AI, which is currently present in 24 US states, aims to complete its coverage of the United States but also to expand internationally. The company has been approached by several companies in the United Kingdom, Australia and Canada. "When it comes to adding a new language, French will be at the top of the list with Spanish," promises Michel Meyer. For now, the San Francisco-based start-up employs only six people. Alongside Michel Meyer are two other co-founders: Pierre Berkaloff of France, CTO, and John Kim, in charge of customer follow-up.

The "product CEO" of Presence AI, Michel Meyer has also invested in a dozen start-ups that he mentors. Among them are Aircall (a call center software publisher) and Algolia (the well-known application search engine), both also created by French founders.

AI, the new battleground between Office 365 and G Suite

While Microsoft focuses on document creation support, Google is concentrating on Gmail, with help writing messages.

Historically, Office 365 was the first productivity suite to integrate machine learning. This dimension was introduced with Delve in 2015. The objective of this brick? To offer users a personalized, prioritized view of their content (files, emails, conversations...) based on their relationship and document graph: their reporting lines, collaboration history, the web of their interests... Delve remains very little deployed to date; feedback on the application is rare or non-existent. Though less and less promoted, Delve is still maintained by Microsoft, and is even marketed in the American group's French data centers. Four years later, Google is catching up: in mid-2018, the Mountain View company caused a sensation by equipping G Suite with a series of AI functions centered on Gmail.

Feature                                    | Office 365 | G Suite
Help with content creation                 | x          |
Bots and team messaging                    | x          |
Knowledge management                       | x          |
Smart messaging                            |            | x
Unified and personalized document search   |            | x
Grammatical and semantic suggestions       | x          | x

At the heart of these new advances, Gmail has gained a smart reply feature offering predefined answers based on received messages. To an e-mail requesting a call "Wednesday at 11 am or Thursday at 5 pm," it will, for example, suggest several replies: "Wednesday is perfect," "Thursday suits me," "Both work for me" or "Neither of the two proposals suits me." Simple and efficient.

"Google manages to decode fairly complex emails," says Arnaud Rayrole, CEO of Lecko, a French consulting firm specializing in collaborative solutions. "For a message asking for a new quote alongside other peripheral information, smart reply will be able to put forward consistent suggestions: 'Here is the modified quote' or 'Excellent news, here is our new proposal'." Thomas Poinsot, digital consultant at the French services company Spectrum Group, adds: "Smart reply is a real plus, especially in mobile situations when there is little time to answer." Google already plans to extend it to its Hangouts instant messaging.

Gmail: AI at the service of messaging

Another AI-based Gmail lever: Smart Compose. For the time being available only in English, this feature auto-completes a message as it is being typed. By analyzing the words entered, it predicts the most probable next words given the context, and thus speeds up typing. "This AI will expand to other languages and grow richer over time by identifying how you address this or that person," says the Mountain View giant (read the official post).

Animation by Google showing Gmail's built-in writing help feature. © Google
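For intuition, here is a toy Python sketch of the principle behind this kind of auto-completion: predicting the most probable next word from what has already been typed. Gmail's real system is a large neural language model; this bigram counter only illustrates the idea on a made-up corpus.

```python
# Toy next-word suggester: count word bigrams, then suggest the most
# frequent follower of the word just typed. Purely illustrative.
from collections import Counter, defaultdict

corpus = (
    "thanks for the update "
    "thanks for the invitation "
    "see you on wednesday "
    "see you on thursday"
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word: str):
    """Return the most frequent word observed after prev_word, if any."""
    following = bigrams.get(prev_word)
    return following.most_common(1)[0][0] if following else None

print(suggest("for"))  # -> 'the'
print(suggest("on"))   # -> 'wednesday' (first of two tied candidates)
```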

On the smart e-mail front, Office 365 is, for now, missing in action. In terms of artificial intelligence, Office 365 R&D focuses on file creation support. Among the first Office bricks affected: PowerPoint. By analyzing a presentation as it is being created, the application recommends related content (images or text), stored locally or on the web, that can be used in the writing. It also uses image recognition to suggest complementary photos (read the MSFT article).

Smart content in PowerPoint

Of course, G Suite is also developing this approach (read the post). But Microsoft pushes it further. In Excel, for example, the editor now includes a smart assistant called Ideas that identifies trends, patterns and outliers in a table. Automatic translation is also supported in Excel, PowerPoint and Word, with 60 languages covered, and in Stream for automatic video captioning. In parallel, Microsoft continues to optimize SharePoint's machine learning layer. On the program: image decoding for character recognition and metadata extraction, plus sentiment analysis and facial recognition using Azure analytics services. The same logic applies to Power BI: the data visualization application can use Azure Machine Learning to refine its processing, including identifying entities (organizations, people, locations). Via Azure ML, it even offers the ability to build one's own machine learning model (read the post).

Animation illustrating the capabilities of the intelligent assistant Ideas, integrated into the right-hand column of Excel. © Microsoft

Another battleground where AI comes into play: content search. Unsurprisingly, this is an area in which Google has a good head start. "Google Drive's search engine, Google Search, is very powerful," says Arnaud Rayrole at Lecko. "It suggests results based on your favorite themes and on how frequently you consult or edit a given document. It ranks results according to the authors you exchange with the most." Only downside: Google Search does not go as far as managing skills. "It does not analyze employees' profiles, career paths and work history to unearth the knowledge available internally that could be useful for a given project," continues Arnaud Rayrole. That is an approach Microsoft takes via Delve and Yammer, Office 365's enterprise social networking brick.

Google’s strategy more readable

In terms of document search, Microsoft still has a way to go before catching up with Google. The group has announced its intention to move toward a unified experience in this area. For now, several indexing engines coexist within Office 365, each associated with an application (Yammer, Teams, Outlook, Drive...). The publisher intends to consolidate them into a single base called Microsoft Search, R&D work that should be completed by the end of the first half of 2019. The promise? An AI-based search environment that delivers personalized, consistent results across the entire platform. Eventually, Satya Nadella's company even intends to move toward a common search interface covering not only Office 365 but also Windows 10 and the Dynamics 365 software suite.

For the consultants surveyed, these Office developments are generally going in the right direction, but they are less legible than the simple, pragmatic AI levers Google is deploying in Gmail.

Last battleground: chatbots. Google and Microsoft both integrate this dimension into their respective suites through their team messaging applications: Hangouts Chat for the former and Teams for the latter. Launched in early 2017, just over a year before Hangouts Chat, "Teams has significantly more conversational agents," says Thomas Poinsot at Spectrum Group. "But as at Google, they are mostly limited to simple, click-to-action tasks. Rare are those equipped with a conversational AI layer capable of providing the expected answer by querying the right applications."

Algorithms that shape the business

"In the end, all these efforts have a common purpose: to help users manage ever-increasing volumes of data. The goal is laudable, but AI is never neutral. It is nothing less than a world view coded into algorithms, which ultimately gives Google and Microsoft the opportunity to guide an organization's choices by prioritizing information according to their own rules," warns Arnaud Rayrole. "Likewise, they encourage evaluating employee performance according to their own logic." On this point, Lecko's CEO points to the Office 365 MyAnalytics brick: "It provides KPIs inspired by American culture, for example the rate of users sending emails outside working hours, rather than broken down by time slot."

Artificial intelligence: what consequences for PIM, MDM and DAM?

Progress in AI is impressive. There are still limits, but it is already possible to exploit these advances in product information management (PIM) solutions, master data management (MDM) systems and digital asset management (DAM) systems.

AI can do anything

The media love artificial intelligence! They talk about it a great deal, including about the powers attributed to it. And to convince the most reluctant, it managed to beat man at the game of Go, a feat long considered impossible for a machine: calculating all the possible moves would take too long, and Go requires experience and intuition... But AlphaGo's advances in deep learning won out.

Thanks to this major technological breakthrough, we discover that artificial intelligence detects cancers, drives vehicles and converses in natural language... Artificial intelligence also makes coffee and even plays restaurant chef.

With such progress, we are led to believe that artificial intelligence is already capable of anything: programming in place of humans, composing music or creating works of art. Literature and sci-fi films persuade us that the uprising of the machines is coming, that Skynet will take power, and that humanity is in danger because it is losing control of machines so superior to it.

Artificial intelligence does not think

Yet artificial intelligence does not think, at least not the kind that works today. AI is just a system of very advanced algorithms that are defined and controlled by humans. Artificial intelligence is able to exploit gigantic amounts of data (big data) to extract statistics and complex mathematical formulas that allow it to recognize and reproduce. And that is the difference with humans: AI is, so far, not able to know and produce on its own. Such autonomy is not for tomorrow.
In a recent interview, Luc Julia, one of the designers of Apple's Siri assistant, recalled that "artificial intelligence does not exist," as he explains in a book of the same title. There is still a long way to go before artificial intelligence becomes a real intelligence capable of consciousness, emotions and instinct, able to work across several fields and to learn quickly by itself.

Recent advances in machine learning and deep learning are based on "learning" performed by the machine. This learning requires very large amounts of data and many iterations in order to work. Google DeepMind's victory over professional players at StarCraft II is a good example: it took the equivalent of nearly 200 years of training to reach this level of play. A gigantic amount of time, which shows that humans, happily, learn much faster. And humans are able to capitalize on their experience and apply it to new subjects (such as learning a new video game), whereas artificial intelligences are still specialized in very precisely defined tasks.

PIM / MDM / DAM and artificial intelligence

Today, product information management (PIM) solutions, master data management (MDM) systems and digital asset management (DAM) systems can centralize large amounts of information (technical characteristics, logistics, marketing, pricing...). The goal is to increase the quality of the information and publish it effectively across different channels.

Using artificial intelligence in these solutions promises to solve all the problems encountered during the implementation of these software tools. Indeed, these systems can organize information with a high level of finesse, using rich metadata. Yet the quality of the information depends first and foremost on what users enter: a wrong price, an inconsistent image, a duplicate record or incomplete properties... all these problems are caused by incorrect data entry.

With AI, we could have an intelligent system able to read all this information, identify what is wrong, correct errors and, better still, enter the data itself.

The latest technologies already allow some of these actions. Today, a system can analyze your images and compare them to your product descriptions, translate texts automatically, or analyze data to derive statistical rules and identify the elements that do not respect them.
To perform such actions properly, it is necessary to rely on existing "knowledge bases" (such as Google Translate, an artificial intelligence system that has already learned to translate between many languages) or to develop your own knowledge base. In the latter case, the task is complex because, for this base to be usable, it must contain a very large volume of data of very good quality. Not everyone has tens or hundreds of thousands of homogeneous product sheets of consistent quality with which to train a system:

  • If the quality of the data is uncertain or fluctuates too much, so will the learning, and it will not produce good results;
  • If the amount of data is insufficient, the learning cannot succeed;
  • If the elements to be processed are too heterogeneous, no "statistical trend" will emerge properly.

Thus, building the training and validation datasets is a very delicate task. It requires the help of specialists (the famous data scientists), and it is the most time-consuming task in machine learning projects.
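As a taste of the statistical-rule idea mentioned above, here is a minimal Python sketch that flags product records whose price deviates sharply from the rest of their category. The data is invented, and real PIM/MDM quality checks are far richer.

```python
# Toy data-quality rule: flag prices that deviate strongly from the mean.
import statistics

products = [
    {"sku": "A1", "category": "kettle", "price": 39.9},
    {"sku": "A2", "category": "kettle", "price": 44.5},
    {"sku": "A3", "category": "kettle", "price": 41.0},
    {"sku": "A4", "category": "kettle", "price": 4.1},  # likely entry error
]

prices = [p["price"] for p in products]
mean, stdev = statistics.mean(prices), statistics.stdev(prices)

for p in products:
    z = (p["price"] - mean) / stdev  # standardized deviation from the mean
    if abs(z) > 1.2:  # loose threshold, suited to this tiny sample
        print(f"Suspicious price for {p['sku']}: {p['price']} (z={z:.1f})")
```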

As for the ability of machines to enter data or choose the right images, it all depends on the desired objective:

  • Preparing long descriptions from various characteristics, or automatically associating images with the right products based on their metadata: this is already possible with "traditional" algorithms. What AI could add is the ability to analyze, across thousands of product sheets, how humans did it, in order to deduce the rules to apply. The utility is lower, though, because users are usually able to formulate these rules directly.
  • Writing appropriate marketing copy or choosing the most flattering image to present a product: such actions are subjective and rely on intuition. You need "situational intelligence," and current artificial intelligence does not have it.

Exploiting progress in AI in the area of PIM/MDM/DAM is already possible. There are still limits, but they are being pushed back a little more each day.

Artificial intelligence and data capture: technologies that make a great pair!

From Siri and Alexa to chatbots and robot traders, artificial intelligence has fundamentally changed many aspects of how we work, and data capture is no exception.

Close your eyes and imagine dropping your invoices into a scanner, walking away, and letting your computer archive and sort them so that you only have the "exceptions" to process before paying the bills. Do you think this is a dream still far from reality? Not so sure.

Did you know? Truly intelligent capture software does not require templates, keywords, exact definitions, classifications or indexes to do a good job. It can extract the right information and make sense of a multitude of documents on its own, whatever their size, format, language or the symbols used.

Three ways artificial intelligence is changing data capture

With intelligent capture software, the AI-based "engine" can learn to perform data entry, like a new employee. It can quickly extract contextual information and learn to interpret the patterns and characteristics of different types of documents. In addition, it can validate the data, providing extra protection that employees could not achieve without tedious manual searches.

Intelligent data capture has changed the game for three main tasks: classification, extraction and validation.

– Classification

With classification, also called "document sorting," the software learns to recognize different types of documents (as the user "teaches" it some variations and examples). The machine learning engine reduces the number of rules to be applied, which gives a high level of confidence in the classification of documents with minimal manual effort.

– Extraction

Artificial intelligence has worked wonders for extracting data from semi-structured and unstructured documents. Consider, for example, identifying the invoice number, which traditionally involves creating complex templates, keywords and links around specific fields and labels. A new employee can look at a document and immediately locate the invoice number, regardless of the form's layout. Now software can do it too, without the need for programming.

– Validation

AI-driven validation extends lookups with different tools. It can use several sources of information (such as quantity, price, description or amount) to link an item to the system database.
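To make the contrast concrete, here is a hedged Python sketch of the "traditional" approach the text describes: a hard-coded pattern for extracting an invoice number, followed by a naive validation lookup. Intelligent capture engines learn such patterns instead of hard-coding them; all sample texts and records here are invented.

```python
# Traditional baseline: regex keyed to labels such as "Invoice No",
# then a cross-check of the extracted number against a reference database.
import re

INVOICE_RE = re.compile(
    r"(?:invoice|facture)\s*(?:no\.?|n°|#|number)?\s*[:\-]?\s*([A-Z0-9\-]{4,})",
    re.IGNORECASE,
)

# Hypothetical reference database: known invoice numbers and their amounts.
KNOWN_INVOICES = {"FA-2018-00421": 980.0, "2018-117": 1240.0}

samples = [
    "INVOICE NO: FA-2018-00421 dated 12/03/2018",
    "Facture n° 2018-117 / Total: 1 240,00 EUR",
    "Order confirmation 554 (no reference field)",
]

for text in samples:
    match = INVOICE_RE.search(text)
    if not match:
        print("extraction failed:", text)
        continue
    number = match.group(1)
    # Validation step: is the extracted number present in the database?
    status = "known" if number in KNOWN_INVOICES else "unknown"
    print(f"{number}: {status}")
```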

Working in tandem: intelligent capture and robotic process automation

The market for RPA (Robotic Process Automation) is booming, and so far it is delivering on its promise of automating complex, rule-based processes. Forrester expects a global market worth $2.9 billion in 2021 (of which document capture is only a fraction), compared with just $250 million in 2016, more than tenfold growth in five years.

In other words, the system itself becomes smarter.

Beyond the obvious advantages of automation, the use of intelligent data capture software also eliminates guesswork on the configuration side. It is important to note that the goal of AI-based data capture is not to replace humans, but to drive as much automation as possible with machines that can perform tasks intelligently. In the end, employees are freed from mundane tasks and can take on valuable work that requires a human mind to do things right.

In a world where information and documents are constantly changing, any company that wants to succeed must learn and adapt, ideally with technology that does the same.