General Archives - AppFerret


Facebook’s Decline of 2017 is Coming to Light

2019-02-18 - By 

In 2017 we didn’t just ditch Facebook; we changed how we use it. It’s not just the teen exodus that has been going on for years: in 2017, adults also started spending significantly less time on the platform.

According to Pivotal Research analysis of Nielsen data, reported by Business Insider, the amount of time adults spent on Facebook declined 4% year-over-year (YoY) in November 2017. This coincided with a pretty epic decline in referral traffic.

Due to Facebook’s weakness in video, much of the time we spend online is going directly to YouTube, cutting into the time we spend on Facebook. As Facebook declines in time on site and referral traffic, Google is snapping up a larger share of people’s time spent online, and we know YouTube is the main engine of that growth.

FACEBOOK REFERRAL TRAFFIC DROP

According to the Shareaholic Traffic Report, Facebook’s share of visits fell 8% in 2017, with Google, Pinterest and Instagram benefiting from the slide. So with Facebook’s drop in activity and traffic, the web is actually a slightly better place, with YouTube, Pinterest, Instagram, WhatsApp and even the likes of LinkedIn benefiting (though LinkedIn’s algorithm has become very problematic).

Facebook’s share of visits dropped 12.7% between the second half of 2016 and the second half of 2017, per the report.

If you think about all the millions of users Facebook is supposed to have, that’s a huge amount of traffic and usage. Facebook will say it’s by design, part of creating a better feed, but the times they are changing (just ask Walmart and its plummeting stock).

Search regained the lead from social in 2017 — driving 34.8% of site visits, compared with 25.6% from social. In China we can already see that the future of social media is bots and automated “like” farms. Instagram and Twitter have been trying to tackle this problem to keep social media more “human”, as the way we share experiences has shifted to Instagram-style stories.

Twitter is profitable again and Pinterest drives shares of real value, but Facebook is still abhorrently messy, akin to a social media misinformation dystopia. Microsoft has turned its LinkedIn feed into a Facebook-like experience, and that’s not a good thing. Snapchat can’t seem to get a simple redesign right as users riot, petition and leave for Instagram. Don’t even get me started on the actual value of using Instagram.

Facebook user behavior changed, decreasing the time spent by 5%, which totals about 50 million minutes per day. The time users do spend is increasingly focused on video viewing, which is less likely to link out to other sites — but in video, Facebook has already lost the game.

Micro video on Facebook used to go viral almost by default, which just made the user experience feel spammy. Politics and echo bubbles made Facebook feel like a very bad, thwarted online forum. There are better places to keep in touch with friends and family from around the globe. Whoever gets their news from Facebook has to be an idiot, or above fifty years old, or in some impoverished country.

YouTube, Flipboard, and LinkedIn also gained slightly in share of visits in 2017. Reddit and Pinterest still do pretty well for referral traffic. For North Americans this is of course a mixed blessing: if Facebook had become what WeChat is in China, we’d feel even worse about the state of the internet. Unfortunately for consumers, however, Facebook’s legion of apps just isn’t that valuable or convenient for actually engaging with businesses or services.

THE DEATH OF ORGANIC TRAFFIC

Chartbeat data showed Facebook traffic to publishers declined 6 percent from the beginning of January. With Facebook’s pivot away from the News Feed, publishers and corporate brands, ad costs will rise as fewer users actually spend time on, and are reachable through, Facebook’s platform. In some design sense, Facebook has failed the mobile internet.

Facebook failed to create a competitor to YouTube, did not anticipate anything that Netflix became, and was not able to provide a VR experience to lead the next generation: all epic failures for its future growth. If Amazon is an ecosystem of growing value, Facebook is an online dystopia that puts profits ahead of people and the user experience.

You could not pay me to spend more time on Facebook’s legion of apps. I would have deleted my account years ago if I didn’t need it for work. But the way we used to use social media, on a platform that let itself be weaponized, belongs to an era of the old internet that’s never coming back. The walled gardens of the Duopoly really did ruin the internet. It will never be the same again.

Original article here.



The Internet is Important to Everyone (Infographic)

2017-07-10 - By 

The Internet is not a luxury. The Internet is important to everybody – individuals, companies, governments and institutions of all kinds – 100%. This infographic shows the relationship of each group to the Internet and how some groups are being left behind.

Full Infographic:

Original infographic here.



10 new AWS cloud services you never expected

2017-01-27 - By 

From data scooping to facial recognition, Amazon’s latest additions give devs new, wide-ranging powers in the cloud

In the beginning, life in the cloud was simple. Type in your credit card number and—voilà—you had root on a machine you didn’t have to unpack, plug in, or bolt into a rack.

That has changed drastically. The cloud has grown so complex and multifunctional that it’s hard to jam all the activity into one word, even a word as protean and unstructured as “cloud.” There are still root logins on machines to rent, but there are also services for slicing, dicing, and storing your data. Programmers don’t need to write and install as much as subscribe and configure.

Here, Amazon has led the way. That’s not to say there isn’t competition. Microsoft, Google, IBM, Rackspace, and Joyent are all churning out brilliant solutions and clever software packages for the cloud, but no company has done more to create feature-rich bundles of services for the cloud than Amazon. Now Amazon Web Services is zooming ahead with a collection of new products that blow apart the idea of the cloud as a blank slate. With the latest round of tools for AWS, the cloud is that much closer to becoming a concierge waiting for you to wave your hand and give it simple instructions.

Here are 10 new services that show how Amazon is redefining what computing in the cloud can be.

Glue

Anyone who has done much data science knows it’s often more challenging to collect data than it is to perform analysis. Gathering data and putting it into a standard data format is often more than 90 percent of the job.

Glue is a new collection of Python scripts that automatically crawls your data sources, collects the data, applies any necessary transforms, and sticks the result in Amazon’s cloud. It reaches into your data sources, snagging data using all the standard acronyms, like JSON, CSV, and JDBC. Once it grabs the data, it can analyze the schema and make suggestions.

The Python layer is interesting because you can use it without writing or understanding Python—although it certainly helps if you want to customize what’s going on. Glue will run these jobs as needed to keep all the data flowing. It won’t think for you, but it will juggle many of the details, leaving you to think about the big picture.
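As a rough sketch of how this can look from code once Glue is available in your account (a hedged example assuming boto3 with working AWS credentials; the crawler, IAM role, bucket, and job names are all made up), crawling a data source and kicking off a transform job might go something like this:

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    # Hypothetical crawler that scans an S3 prefix and infers table schemas
    # into Glue's data catalog. All names and the IAM role are placeholders.
    glue.create_crawler(
        Name="sales-crawler",
        Role="arn:aws:iam::123456789012:role/GlueServiceRole",
        DatabaseName="sales_db",
        Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/sales/"}]},
    )
    glue.start_crawler(Name="sales-crawler")

    # Once a transform job has been defined, it can be triggered on demand;
    # Glue runs the job's generated (or customized) Python script for you.
    run = glue.start_job_run(JobName="sales-to-parquet")
    print("Started job run:", run["JobRunId"])

The same calls can be wired to a schedule or trigger, which is how Glue runs jobs as needed without you babysitting them.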

FPGA

Field Programmable Gate Arrays have long been a secret weapon of hardware designers. Anyone who needs a special chip can build one out of software. There’s no need to build custom masks or fret over fitting all the transistors into the smallest amount of silicon. An FPGA takes your software description of how the transistors should work and rewires itself to act like a real chip.

Amazon’s new AWS EC2 F1 brings the power of FPGAs to the cloud. If you have highly structured and repetitive computing to do, an EC2 F1 instance is for you. With EC2 F1, you can create a software description of a hypothetical chip and compile it down to a tiny number of gates that will compute the answer in the shortest amount of time. The only thing faster is etching the transistors in real silicon.

Who might need this? Bitcoin miners compute the same cryptographically secure hash function a bazillion times each day, which is why many of them use FPGAs to speed up the search. If you have a similarly compact, repetitive algorithm that can be written into silicon, the FPGA instances let you rent machines to run it now. The biggest winners are those who need to run calculations that don’t map easily onto standard instruction sets — for example, bit-level functions and other nonstandard, nonarithmetic calculations. If you’re simply adding a column of numbers, the standard instances are better for you. But for some, EC2 with FPGAs might be a big win.

Blox

As Docker eats its way into the stack, Amazon is trying to make it easier for anyone to run Docker instances anywhere, anytime. Blox is designed to juggle the clusters of instances so that the optimum number are running—no more, no less.

Blox is event driven, so it’s a bit simpler to write the logic. You don’t need to constantly poll the machines to see what they’re running; they all report back, so the right number can run. Blox is also open source, which makes it easier to reuse outside of the Amazon cloud, should you need to do so.

X-Ray

Monitoring the efficiency and load of your instances used to be simply another job. If you wanted your cluster to work smoothly, you had to write the code to track everything. Many people brought in third parties with impressive suites of tools. Now Amazon’s X-Ray is offering to do much of the work for you. It’s competing with many third-party tools for watching your stack.

When a website gets a request for data, X-Ray traces the request as it flows through your network of machines and services. It then aggregates the data from multiple instances, regions, and zones so that you can stop in one place to flag a recalcitrant server or a wedged database. You can watch your vast empire from a single page.
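As an illustrative sketch (assuming the aws-xray-sdk Python package and a reachable X-Ray daemon; the segment names and helper function here are hypothetical), manually instrumenting one request might look like this:

    from aws_xray_sdk.core import xray_recorder

    def run_inventory_query():
        # Stand-in for a real database call.
        return [("widget", 3), ("gadget", 7)]

    # Open a segment for one inbound request. In a real web app, the SDK's
    # Flask or Django middleware would open and close segments automatically.
    xray_recorder.begin_segment("checkout-request")

    # Time a downstream call as a subsegment so it appears on the trace map.
    subsegment = xray_recorder.begin_subsegment("query-inventory-db")
    try:
        rows = run_inventory_query()
        subsegment.put_annotation("row_count", len(rows))
    finally:
        xray_recorder.end_subsegment()

    xray_recorder.end_segment()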

Rekognition

Rekognition is a new AWS tool aimed at image work. If you want your app to do more than store images, Rekognition will chew through images searching for objects and faces using some of the best-known and most-tested machine vision and neural-network algorithms. There’s no need to spend years learning the science; you simply point the algorithm at an image stored in Amazon’s cloud, and voilà, you get a list of objects and a confidence score that ranks how likely it is that each answer is correct. You pay per image.

The algorithms are heavily tuned for facial recognition. They will flag faces, then compare them to each other and to reference images to help you identify them. Your application can store the meta information about the faces for later processing. Once you put a name to the metadata, your app will find people wherever they appear. Identification is only the beginning. Is someone smiling? Are their eyes closed? The service will deliver the answer, so you don’t need to get your fingers dirty with pixels. If you want to use impressive machine vision, Amazon will charge you not by the click but by the glance at each image.
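A minimal sketch of the face-attribute case with boto3 (working credentials assumed; the bucket, key, and region are placeholders):

    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    # Analyze an image that already sits in S3.
    response = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": "example-bucket", "Name": "team-photo.jpg"}},
        Attributes=["ALL"],  # request smile, eyes-open, emotion estimates, etc.
    )

    for face in response["FaceDetails"]:
        print("smiling:", face["Smile"]["Value"],
              "| eyes open:", face["EyesOpen"]["Value"],
              "| detection confidence: %.1f%%" % face["Confidence"])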

Athena

Working with Amazon’s S3 has always been simple: if you want some data, you request the object that holds it. Amazon’s Athena now makes querying that data much simpler. It will run the queries on S3 for you, so you don’t need to write the looping code yourself. Yes, we’ve become too lazy to write loops.

Athena uses SQL syntax, which should make database admins happy. Amazon will charge you for every byte that Athena churns through while looking for your answer. But don’t get too worried about the meter running out of control, because the price is only $5 per terabyte. That’s about half a billionth of a cent per byte. It makes the penny candy stores look expensive.
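As a hedged sketch with boto3 (the database, table, and output bucket below are invented for illustration), running a query and reading the results might look like this:

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Run SQL directly against files in S3; Athena executes asynchronously.
    query = athena.start_query_execution(
        QueryString="SELECT page, COUNT(*) AS hits FROM logs GROUP BY page",
        QueryExecutionContext={"Database": "weblogs"},
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
    )
    qid = query["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    # The first row of the result set is the header row.
    if state == "SUCCEEDED":
        results = athena.get_query_results(QueryExecutionId=qid)
        for row in results["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])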

Lambda@Edge

The original idea of a content delivery network was to speed up the delivery of simple files, like JPG images and CSS files, by pushing out copies to a vast array of content servers parked near the edges of the Internet. Amazon is taking this a step further by letting us push Node.js code out to these edges, where it will run and respond. Your code won’t sit on one central server waiting for requests to poke along the backbone from people around the world. It will clone itself, so it can respond in microseconds without being impeded by all that network latency.

Amazon will bill your code only when it’s running. You won’t need to set up separate instances or rent out full machines to keep the service up. It is currently in a closed test, and you must apply to get your code in their stack.

Snowball Edge

If you want some kind of physical control of your data, the cloud isn’t for you. The power and reassurance that comes from touching the hard drive, DVD-ROM, or thumb drive holding your data isn’t available to you in the cloud. Where is my data exactly? How can I get it? How can I make a backup copy? The cloud makes anyone who cares about these things break out in cold sweats.

The Snowball Edge is a box filled with data that can be delivered anywhere you want. It even has a shipping label that’s really an E-Ink display, just like the screen on a Kindle. When you want a copy of massive amounts of data that you’ve stored in Amazon’s cloud, Amazon will copy it to the box and ship the box to wherever you are. (The documentation doesn’t say whether Prime members get free shipping.)

Snowball Edge serves a practical purpose. Many developers have collected large blocks of data through cloud applications, and downloading these blocks across the open internet is far too slow. If Amazon wants to attract large data-processing jobs, it needs to make it easier to get large volumes of data out of the system.

If you’ve accumulated an exabyte of data that you need somewhere else for processing, Amazon has a bigger version called Snowmobile that’s built into an 18-wheel truck complete with GPS tracking.

Oh, and it’s worth noting that the boxes aren’t dumb storage. They can run arbitrary Node.js code too, so you can search, filter, or analyze … just in case.

Pinpoint

Once you’ve amassed a list of customers, members, or subscribers, there will be times when you want to push a message out to them. Perhaps you’ve updated your app or want to convey a special offer. You could blast an email to everyone on your list, but that’s a step above spam. A better solution is to target your message, and Amazon’s new Pinpoint tool offers the infrastructure to make that simpler.

You’ll need to integrate some code with your app. Once you’ve done that, Pinpoint helps you send out the messages when your users seem ready to receive them. When a so-called targeted campaign is done, Pinpoint will collect and report data about the level of engagement with it, so you can tune your targeting efforts in the future.
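As a rough sketch of that integration step with boto3 (the application ID, endpoint ID, address, and attributes are all hypothetical placeholders), registering a user endpoint that campaigns can later target might look like this:

    import boto3

    pinpoint = boto3.client("pinpoint", region_name="us-east-1")

    # Register (or update) one endpoint: a reachable address for one user
    # in a Pinpoint project. Campaigns can then segment on its attributes.
    pinpoint.update_endpoint(
        ApplicationId="0123456789abcdef0123456789abcdef",  # placeholder ID
        EndpointId="user-42-email",
        EndpointRequest={
            "ChannelType": "EMAIL",
            "Address": "user42@example.com",
            "User": {"UserId": "user-42"},
            "Attributes": {"plan": ["trial"]},  # example targeting attribute
        },
    )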

Polly

Who gets the last word? Your app can, if you use Polly, the latest generation of speech synthesis. In goes text and out comes sound—sound waves that form words that our ears can hear, all the better to make audio interfaces for the internet of things.
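A minimal sketch with boto3 (working credentials assumed; the voice, text, and output file are arbitrary choices):

    import boto3

    polly = boto3.client("polly", region_name="us-east-1")

    # Turn a line of text into an MP3; "Joanna" is one of Polly's stock voices.
    response = polly.synthesize_speech(
        Text="Your package has arrived at the front door.",
        OutputFormat="mp3",
        VoiceId="Joanna",
    )

    with open("announcement.mp3", "wb") as out:
        out.write(response["AudioStream"].read())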

Original article here.



5 trends that will change the way you work in 2017 (videos)

2017-01-01 - By 

Robots are going to take a seat at the conference room table in 2017.

Humans are going to be more stressed than ever.

And to stay competitive with their new robot colleagues, workers are going to start taking smart drugs.

That’s according to futurist Faith Popcorn, the founder and CEO of the consultancy Faith Popcorn’s BrainReserve. Since launching in 1974, she has helped Fortune 500 companies including MasterCard, Coca-Cola, P&G and IBM.

Here are five trends you can expect to see in the workplace in 2017, according to Popcorn.

1. Coffee alone won’t keep you competitive.

Employees are going to start taking a burgeoning class of cognitive enhancers called nootropics, or “smart drugs.” These nutritional supplements don’t all have the same ingredients, but they reportedly increase physical and mental stamina.

Silicon Valley has been an early adopter of the bio-hacking trend. That’s perhaps unsurprising, as techies were also the first to try the likes of food substitute Soylent. There’s an active subreddit dedicated to the topic.

Nootropics will go mainstream in 2017 because “the robots are edging us out,” says Popcorn. “When you come to work you have to be enhanced, you have to be on the edge, you have to be able to work longer and harder. You have to be able to become more important to your company.”


2. Robots will rise.

Unskilled blue-collar workers will be the first to lose their jobs to automation, but robots will eventually replace white-collar workers, too, says Popcorn, pointing to an Oxford University study that found 47 percent of U.S. jobs are at risk of being replaced.

“Who would you rather have do your research? A cognitive computer or a human?” says Popcorn. “Human error is a disaster. … Robots don’t make mistakes.”


3. Everyone will start doing the hustle.

Already, more than a third of the U.S. workforce are freelancers and will generate an estimated $1 trillion in revenue, according to a survey released earlier this fall by the Freelancers Union and the freelancing platform Upwork. The percentage of freelancers will increase in 2017 and beyond, she believes. “It’s accelerating every year,” says Popcorn.

She also points to some large companies that are building offices with fewer seats than employees. Citibank built an office space in Long Island City, Queens, with 150 seats for 200 employees and no assigned desks to encourage a fluid-feeling environment.

And Popcorn points to the rise of the side hustle: People “need more money than they are being paid,” she says. And they don’t trust their employers. “People are saying, ‘I want to have two or three hooks in the water. I don’t want to devote myself to one company.'”

Younger employees in particular are not interested in working for large, legacy companies like those their parents worked for, according to research Popcorn has done. “We are really turned off on ‘big.'”


4. There will be tears.

While people have always been emotional beings, historically emotions haven’t belonged inside the office. That’s basically because workplaces have largely been run by men. But that’s changing.

“The female entry into the workplace has brought emotional intelligence into the workplace and that comes with emotion,” says Popcorn. “There is a lot of anxiety about the future, there is a lot of stress-related burnout and we are seeing more emotion being displayed in the workplace.”

That doesn’t mean you should start crying on your boss’s shoulder, though. Especially if your boss is male. While women tend to be more comfortable with their feelings, men are still uncomfortable with elevated levels of emotion, says Popcorn, admitting that these gender-based observations are generalizations.

“WE ARE SEEING MORE EMOTION BEING DISPLAYED IN THE WORKPLACE.”

-Faith Popcorn, futurist

Going forward, the futurist expects to see more stress rooms in office buildings and “more of a recognition that people are living under a crushing amount of anxiety.” A stress room would be a welcoming space for employees to go to take a break and perhaps drink kava, a relaxing, root-based tea.

Open floor plans don’t give employees any place to breathe, Popcorn points out: “It’s like being watched 24/7.” Employees put in earbuds to approximate privacy, but sitting in open spaces is not conducive to employee mental health. “It is very stressful to work in the open floors,” she says. “It’s good for real estate, you can do it with fewer square feet, but it’s not particularly good for people.”

5. The boundary between work and play will crumble.

“People are going to be working 24 hours a day,” says Popcorn. Technology has enabled global, constant communication. The WeLive spaces that WeWork launched are indicative of this trend towards work and life integration, she says. “There is no line between work and play.”


Original article here.



Tech trends for 2017: more AI, machine intelligence, connected devices and collaboration

2016-12-30 - By 

The end of one year and the beginning of the next is always a time when we see many predictions and forecasts for the year ahead. We often publish a selection of these to show how tech-based innovation and economic development will be impacted by the major trends.

A number of trends reports and articles have been published – from investment houses to research firms and even innovation agencies. In this article we present headlines and highlights of some of these trends – from Gartner, GP Bullhound, Nesta and Ovum.

Artificial intelligence will have the greatest impact

GP Bullhound released its 52-page research report, Technology Predictions 2017, which says artificial intelligence (AI) is poised to have the greatest impact on the global technology sector. It will see widespread consumer adoption, particularly as virtual personal assistants such as Apple’s Siri and Amazon’s Alexa grow in popularity, as well as enterprise adoption through the automation of repetitive data-driven tasks.

Online streaming and e-sports are also significant market opportunities in 2017 and there will be a marked growth in the development of content for VR/AR platforms. Meanwhile, automated vehicles and fintech will pose longer-term growth prospects for investors.

The report also examines the growth of Europe’s unicorn companies. It highlights the potential for several firms to reach a $10 billion valuation and become ‘decacorns’, including BlaBlaCar, Farfetch, and HelloFresh.

Alec Dafferner, partner, GP Bullhound, commented, “The technology sector has faced up to significant challenges in 2016, from political instability through to greater scrutiny of unicorns. This resilience and the continued growth of the industry demonstrate that there remain vast opportunities for investors and entrepreneurs.”

Big data and machine learning will be disruptors

Advisory firm Ovum says big data continues to be the fastest-growing segment of the information management software market. It estimates the big data market will grow from $1.7bn in 2016 to $9.4bn by 2020, comprising 10 percent of the overall market for information management tooling. Its 2017 Trends to Watch: Big Data report highlights that while the breakout use case for big data in 2017 will be streaming, machine learning will be the factor that disrupts the landscape the most.

Key 2017 trends:

  • Machine learning will be the biggest disruptor for big data analytics in 2017.
  • Making data science a team sport will become a top priority.
  • IoT use cases will push real-time streaming analytics to the front burner.
  • The cloud will sharpen Hadoop-Spark ‘co-opetition’.
  • Security and data preparation will drive data lake governance.

Intelligence, digital and mesh

In October, Gartner issued its top 10 strategic technology trends for 2017, and recently outlined the key themes – intelligent, digital, and mesh – in a webinar.  It said that autonomous cars and drone transport will have growing importance in the year ahead, alongside VR and AR.

“It’s not about just the IoT, wearables, mobile devices, or PCs. It’s about all of that together,” said David Cearley, vice president and Gartner Fellow, according to hiddenwires magazine. “We need to put the person at the center. Ask yourself what devices and service capabilities they have available to them,” he said of how ‘intelligence everywhere’ will put the consumer in charge.

“We need to then look at how you can deliver capabilities across multiple devices to deliver value. We want systems that shift from people adapting to technology to having technology and applications adapt to people.  Instead of using forms or screens, I tell the chatbot what I want to do. It’s up to the intelligence built into that system to figure out how to execute that.”

Gartner’s view is that the following will be the key trends for 2017:

  • Artificial intelligence (AI) and machine learning: systems that learn, predict, adapt and potentially operate autonomously.
  • Intelligent apps: using AI, there will be three areas of focus — advanced analytics, AI-powered and increasingly autonomous business processes and AI-powered immersive, conversational and continuous interfaces.
  • Intelligent things, as they evolve, will shift from stand-alone IoT devices to a collaborative model in which intelligent things communicate with one another and act in concert to accomplish tasks.
  • Virtual and augmented reality: VR can be used for training scenarios and remote experiences. AR will enable businesses to overlay graphics onto real-world objects, such as hidden wires on the image of a wall.
  • Digital twins of physical assets combined with digital representations of facilities and environments as well as people, businesses and processes will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.
  • Blockchain and distributed-ledger concepts are gaining traction because they hold the promise of transforming operating models in industries such as music distribution, identity verification and title registry.
  • Conversational systems will shift from a model where people adapt to computers to one where the computer ‘hears’ and adapts to a person’s desired outcome.
  • Mesh and app service architecture is a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices as well as APIs (application programming interfaces) and events to deliver modular, flexible and dynamic solutions.
  • Digital technology platforms: every organization will have some mix of five digital technology platforms: Information systems, customer experience, analytics and intelligence, the internet of things and business ecosystems.
  • Adaptive security architecture: multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.

The real-world vision of these tech trends

UK innovation agency Nesta also offers a vision for the year ahead, a mix of the plausible and the more aspirational, based on real-world examples of areas that will be impacted by these tech trends:

  • Computer says no: the backlash: the next big technological controversy will be about algorithms and machine learning, which increasingly make decisions that affect our daily lives; in the coming year, the backlash against algorithmic decisions will begin in earnest, with technologists being forced to confront the effects of aspects like fake news, or other events caused directly or indirectly by the results of these algorithms.
  • The Splinternet: 2016’s seismic political events and the growth of domestic and geopolitical tensions, means governments will become wary of the internet’s influence, and countries around the world could pull the plug on the open, global internet.
  • A new artistic approach to virtual reality: as artists blur the boundaries between real and virtual, the way we create and consume art will be transformed.
  • Blockchain powers a personal data revolution: there is growing unease at the way many companies like Amazon, Facebook and Google require or encourage users to give up significant control of their personal information; 2017 will be the year when the blockchain-based hardware, software and business models that offer a viable alternative reach maturity, ensuring that it is not just companies but individuals who can get real value from their personal data.
  • Next generation social movements for health: we’ll see more people uniting to fight for better health and care, enabled by digital technology, and potentially leading to stronger engagement with the system; technology will also help new social movements to easily share skills, advice and ideas, building on models like Crohnology where people with Crohn’s disease can connect around the world to develop evidence bases and take charge of their own health.
  • Vegetarian food gets bloodthirsty: the past few years have seen growing demand for plant-based food to mimic meat; the rising cost of meat production (expected to hit $5.2 billion by 2020) will drive kitchens and laboratories around the world to create a new wave of ‘plant butchers’, who develop vegan-friendly meat substitutes that would fool even the most hardened carnivore.
  • Lifelong learners: adult education will move from the bottom to the top of the policy agenda, driven by the winds of automation eliminating many jobs from manufacturing to services and the professions; adult skills will be the keyword.
  • Classroom conundrums, tackled together: there will be a future-focused rethink of mainstream education, with collaborative problem solving skills leading the charge, in order to develop skills beyond just coding – such as creativity, dexterity and social intelligence, and the ability to solve non-routine problems.
  • The rise of the armchair volunteer: volunteering from home will become just like working from home, and we’ll even start ‘donating’ some of our everyday data to citizen science to improve society as well; an example of this trend was when British Red Cross volunteers created maps of the Ebola crisis in remote locations from home.

In summary

It’s clear that there is an expectation that the use of artificial intelligence and machine learning platforms will proliferate in 2017 across multiple business, social and government spheres. This will be supported with advanced tools and capabilities like virtual reality and augmented reality. Together, there will be more networks of connected devices, hardware, and data sets to enable collaborative efforts in areas ranging from health to education and charity. The Nesta report also suggests that there could be a reality check, with a possible backlash against the open internet and the widespread use of personal data.

Original article here.



Gartner’s Top 10 Strategic Technology Trends for 2017

2016-12-05 - By 

Artificial intelligence, machine learning, and smart things promise an intelligent future.

Today, a digital stethoscope has the ability to record and store heartbeat and respiratory sounds. Tomorrow, the stethoscope could function as an “intelligent thing” by collecting a massive amount of such data, relating the data to diagnostic and treatment information, and building an artificial intelligence (AI)-powered doctor assistance app to provide the physician with diagnostic support in real-time. AI and machine learning increasingly will be embedded into everyday things such as appliances, speakers and hospital equipment. This phenomenon is closely aligned with the emergence of conversational systems, the expansion of the IoT into a digital mesh and the trend toward digital twins.

Three themes — intelligent, digital, and mesh — form the basis for the Top 10 strategic technology trends for 2017, announced by David Cearley, vice president and Gartner Fellow, at Gartner Symposium/ITxpo 2016 in Orlando, Florida. These technologies are just beginning to break out of an emerging state and stand to have substantial disruptive potential across industries.

Intelligent

AI and machine learning have reached a critical tipping point and will increasingly augment and extend virtually every technology-enabled service, thing or application. Creating intelligent systems that learn, adapt and potentially act autonomously, rather than simply executing predefined instructions, is the primary battleground for technology vendors through at least 2020.

Trend No. 1: AI & Advanced Machine Learning

AI and machine learning (ML), which include technologies such as deep learning, neural networks and natural-language processing, can also encompass more advanced systems that understand, learn, predict, adapt and potentially operate autonomously. Systems can learn and change future behavior, leading to the creation of more intelligent devices and programs.  The combination of extensive parallel processing power, advanced algorithms and massive data sets to feed the algorithms has unleashed this new era.

In banking, you could use AI and machine-learning techniques to model current real-time transactions, as well as build predictive models of transactions based on their likelihood of being fraudulent. Organizations seeking to drive digital innovation with this trend should evaluate a number of business scenarios in which AI and machine learning could drive clear and specific business value, and consider experimenting with one or two high-impact scenarios.

Trend No. 2: Intelligent Apps

Intelligent apps, which include technologies like virtual personal assistants (VPAs), have the potential to transform the workplace by making everyday tasks easier (prioritizing emails) and its users more effective (highlighting important content and interactions). However, intelligent apps are not limited to new digital assistants – every existing software category from security tooling to enterprise applications such as marketing or ERP will be infused with AI enabled capabilities.  Using AI, technology providers will focus on three areas — advanced analytics, AI-powered and increasingly autonomous business processes and AI-powered immersive, conversational and continuous interfaces. By 2018, Gartner expects most of the world’s largest 200 companies to exploit intelligent apps and utilize the full toolkit of big data and analytics tools to refine their offers and improve customer experience.

Trend No. 3: Intelligent Things

New intelligent things generally fall into three categories: robots, drones and autonomous vehicles. Each of these areas will evolve to impact a larger segment of the market and support a new phase of digital business but these represent only one facet of intelligent things.  Existing things including IoT devices will become intelligent things delivering the power of AI enabled systems everywhere including the home, office, factory floor, and medical facility.

As intelligent things evolve and become more popular, they will shift from a stand-alone to a collaborative model in which intelligent things communicate with one another and act in concert to accomplish tasks. However, nontechnical issues such as liability and privacy, along with the complexity of creating highly specialized assistants, will slow embedded intelligence in some scenarios.

Digital

The lines between the digital and physical world continue to blur creating new opportunities for digital businesses.  Look for the digital world to be an increasingly detailed reflection of the physical world and the digital world to appear as part of the physical world creating fertile ground for new business models and digitally enabled ecosystems.

Trend No. 4: Virtual & Augmented Reality

Virtual reality (VR) and augmented reality (AR) transform the way individuals interact with each other and with software systems creating an immersive environment.  For example, VR can be used for training scenarios and remote experiences. AR, which enables a blending of the real and virtual worlds, means businesses can overlay graphics onto real-world objects, such as hidden wires on the image of a wall.  Immersive experiences with AR and VR are reaching tipping points in terms of price and capability but will not replace other interface models.  Over time AR and VR expand beyond visual immersion to include all human senses.  Enterprises should look for targeted applications of VR and AR through 2020.

Trend No. 5: Digital Twin

Within three to five years, billions of things will be represented by digital twins, a dynamic software model of a physical thing or system. Using physics data on how the components of a thing operate and respond to the environment, as well as data provided by sensors in the physical world, a digital twin can be used to analyze and simulate real-world conditions, respond to changes, improve operations and add value. Digital twins function as proxies for the combination of skilled individuals (e.g., technicians) and traditional monitoring devices and controls (e.g., pressure gauges). Their proliferation will require a cultural change, as those who understand the maintenance of real-world things collaborate with data scientists and IT professionals. Digital twins of physical assets combined with digital representations of facilities and environments as well as people, businesses and processes will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.

Trend No. 6: Blockchain

Blockchain is a type of distributed ledger in which value exchange transactions (in bitcoin or another token) are sequentially grouped into blocks. Blockchain and distributed-ledger concepts are gaining traction because they hold the promise of transforming operating models in industries such as music distribution, identity verification and title registry. They promise a model to add trust to untrusted environments and reduce business friction by providing transparent access to the information in the chain. While there is a great deal of interest, the majority of blockchain initiatives are in alpha or beta phases, and significant technology challenges exist.

Mesh

The mesh refers to the dynamic connection of people, processes, things and services supporting intelligent digital ecosystems.  As the mesh evolves, the user experience fundamentally changes and the supporting technology and security architectures and platforms must change as well.

Trend No. 7: Conversational Systems

Conversational systems can range from simple informal, bidirectional text or voice conversations such as an answer to “What time is it?” to more complex interactions such as collecting oral testimony from crime witnesses to generate a sketch of a suspect.  Conversational systems shift from a model where people adapt to computers to one where the computer “hears” and adapts to a person’s desired outcome.  Conversational systems do not use text/voice as the exclusive interface but enable people and machines to use multiple modalities (e.g., sight, sound, tactile, etc.) to communicate across the digital device mesh (e.g., sensors, appliances, IoT systems).

Trend No. 8: Mesh App and Service Architecture

The intelligent digital mesh will require changes to the architecture, technology and tools used to develop solutions. The mesh app and service architecture (MASA) is a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices as well as APIs and events to deliver modular, flexible and dynamic solutions.  Solutions ultimately support multiple users in multiple roles using multiple devices and communicating over multiple networks. However, MASA is a long term architectural shift that requires significant changes to development tooling and best practices.

Trend No. 9: Digital Technology Platforms

Digital technology platforms are the building blocks for a digital business and are necessary to break into digital. Every organization will have some mix of five digital technology platforms: Information systems, customer experience, analytics and intelligence, the Internet of Things and business ecosystems. In particular new platforms and services for IoT, AI and conversational systems will be a key focus through 2020.   Companies should identify how industry platforms will evolve and plan ways to evolve their platforms to meet the challenges of digital business.

Trend No. 10: Adaptive Security Architecture

The evolution of the intelligent digital mesh and digital technology platforms and application architectures means that security has to become fluid and adaptive. Security in the IoT environment is particularly challenging. Security teams need to work with application, solution and enterprise architects to consider security early in the design of applications or IoT solutions.  Multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.

Original article here.



We need universal basic income because robots will take all the jobs – Musk

2016-11-12 - By 

We may need to pay people just to live in an automated world, says space biz baron.

Elon Musk reckons the robot revolution is inevitable and it’s going to take all the jobs.

For humans to survive in an automated world, he said that governments are going to be forced to bring in a universal basic income—paying each citizen a certain amount of money so they can afford to survive. According to Musk, there aren’t likely to be any other options.

“There is a pretty good chance we end up with a universal basic income, or something like that, due to automation,” he told CNBC in an interview. “Yeah, I am not sure what else one would do. I think that is what would happen.”

The idea behind universal basic income is to replace all the different sources of welfare, which are hard to administer and come with policing costs. Instead, the government gives everyone a lump sum each month—the size of which would vary depending on political beliefs—and they can spend it however they want.

Switzerland, a country with high wages and high employment, recently held a referendum on giving its people 2,500 Swiss francs (£2,065) per month, plus 625 francs (£516) per child. It was ultimately rejected by a wide margin by the country’s fairly conservative electorate, who generally thought it would give people too much for free.

President Obama has also floated the idea in a confab with Wired: “Whether a universal income is the right model—is it gonna be accepted by a broad base of people?—that’s a debate that we’ll be having over the next 10 or 20 years.”

Robots have already replaced numerous blue collar manufacturing jobs, and are taking over more and more warehousing and logistics roles. Some—perhaps prematurely—are fretting about future AIs being developed to replace professions such as doctors and lawyers. Already, moves are being made in that direction, with chatbots which can get people off parking tickets, and an AI that can predict cases at the European Court of Human Rights. Doctors should be looking over their shoulders, too.

Musk isn’t necessarily downbeat on the automated future, however. He thinks that in the future “people will have time to do other things, more complex things, more interesting things,” and they’ll “certainly have more leisure time.” And then, he added, “we gotta figure how we integrate with a world and future with a vast AI.”

“Ultimately,” he said, “I think there has to be some improved symbiosis with digital super intelligence.”

Original article here.



Jobs That Didn’t Exist 5 Years Ago

2016-10-23 - By 

When LinkedIn took it upon itself to dig into the profiles of its more than 250 million members, it found a number of tech job titles that had come into existence over the last few years.

Wanting an infographic that profiled these hot job titles in an engaging way, LinkedIn hired Visually to design a graphic that covered the unique skills and trending numbers of these job titles. View it here or click here to see it on LinkedIn’s site.



Can Your Startup Answer These 23 Pitch Competition Questions?

2016-10-11 - By 

Asked and answered. These real-life pitch questions from Steve Case’s Rise of the Rest tour can help give you an edge on your next pitch.

Pitch competitions are a reality of startup life, as common as coffee mugs that say “Hustle Harder” or thought leaders expounding on the need for “grit.”

Still, even the smartest entrepreneur isn’t always ready for what competition judges might ask. During Steve Case’s Rise of the Rest tour, a seven-city road trip across the U.S. highlighting entrepreneurs outside the major startup hubs, founders in Phoenix participated in their own mock pitch competition, allowing them to practice and polish their answers.  

We’ve collected a curated selection of questions asked during the competition, some more often than others.

To prepare for your next competition, study these potential questions:

Goals

1. What’s your top priority in the next six months? What metric are you watching the most closely?
2. What’s your exit strategy?

The basics

3. How does your product/service work?
4. Who is your customer?
5. Do you have contracts and if so, how often do they renew?

The team

6. Why is your team the team to bring this to market?
7. You say you’ll have 100 staffers in five years. You have six now. What will those new staffers do?

Advantages

8. Why is your product/service better than what’s already on the market?
9. Who are your competitors?
10. Do you have a patent?


Partnerships

11. If you win the investment, what would that partnership look like?
12. You’ve secured a strategic partnership. Is that partnership exclusive? And if not, is that a liability?

Growth

13. What’s your barrier to capacity?
14. What’s your expansion strategy?

Pricing and revenue

15. How much of your revenue is from upsells? And how do you see that changing over time?
16. Everyone says they can monetize the data they collect. What’s your plan?
17. Can you explain your revenue model?
18. What’s your margin?
19. Are you charging too little?
20. Are you charging too much?

What’s ahead

21. How will you get to 1 million users?
22. Is this trend sustainable?
23. What regulatory approvals do you need and how have you progressed so far?

Original article here.



Four points on why digital transformation is a big deal for the future of IT

2016-10-05 - By 

For the IT sector, the concept of digital transformation represents a time for evolution, revolution and opportunity, according to Information Technology Association of Canada (ITAC) president Robert Watson.

The new president for the technology association made the statements at last week’s IDC Directions and CanadianCIO Symposium in Toronto. The tech trends event was co-hosted by ITWC and IDC with support from ITAC.

Notable sessions included the ITWC-moderated Digital Transformation panel — which featured veteran CIOs discussing digital transformation opportunities and challenges — and IDC Canada’s Nigel Wallis outlining why Canadian business models should shift to reap IoT rewards.

Digital transformation refers to the changes associated with the application of digital technology in all aspects of human society. The overarching event theme framed digital transformation not as a mere buzzword but as a process that tech leaders and organizations should already be adopting. Considering the IT department is the “substance of every industry,” it follows that information technology can play a key role in setting the pace for innovation and future developments, offered Watson.

Both the public and private sectors are looking to diversify operations and economies — the IT sector will lead and enable the development of emerging technologies, including the Internet of Things (IoT): “It is coming for sure and a fantastic opportunity.”

With that in mind, here are four key takeaways from the event.

“Have you ever seen a more dynamic, exciting, and scary time in our industry?”

IDC’s senior vice president and chief analyst Frank Gens outlined reasons why IT is currently entering an “innovation stage” with the era of the Third Platform, which refers to emerging tech such as cloud technology, mobile, social and IoT.

According to IDC, the Third Platform is anticipated to grow to approximately 20 per cent by 2020; 80 per cent of Fortune 100 companies are expected to have digital transformation teams in place by the end of this year.

“It’s about a new foundation for enterprise services. You can connect back-end AI to this growing edge of IoT…you are really talking about collective learning and accelerated learning around the next foundation of enterprise solutions,” said Gens.

Takeaway: In a cloud- and mobile-dominated IT world, enterprises should look to quickly develop platform- and API-based services across their network, noted Gens, while also looking to grow the developer base to use those services.

“Robotics is an extremely vertical driven solution.”

Think of that classic 1927 film Metropolis, and its anthropomorphic robot Maria: While IT has come a long way from Metropolis in terms of developments in robotics, the industry isn’t quite there yet. But we’re close, noted IDC research analyst Vlad Mukherjee, and the industry should look at current advancements in the field.

According to Mukherjee, robotics are driving digital transformation processes by establishing new revenue streams and changing the way we work.

Currently, robotics tech is classified into commercial service, industrial and consumer segments. Canadian firms in total are currently spending $1.08 billion on the technology, Mukherjee said.

Early adopters are looking at reducing costs; this includes the automotive and manufacturing sectors, but also fields such as healthcare, logistics, and resource extraction. In the case of commercial service robotics, the concept works and the business case is there, but adoption is not yet at the point where we can truly take advantage of it, he said.

The biggest expense for robotics is service, maintenance, and battery life, said Mukherjee.

Takeaway: Industrial robots are evolving to become more flexible, easier to setup, support more human interaction and be more autonomously mobile. Enterprises should keep abreast of robotics developments, particularly the rise of collaborative industrial robots which have a lower barrier for SME adoption. This includes considering pilot programs and use cases that explore how the technology can help improve operations and automated processes.

“China has innovated significantly in terms of business models that the West has yet to emulate.”

Analysts Bryan Ma, vice-president of client devices for IDC Asia-Pacific, and Krista Collins, research manager of mobility and consumer for IDC Canada, outlined mobility trends and why the mobility and augmented or virtual reality markets seen in the east will inevitably make their way to Canada.

China is no longer considered a land of cheap knockoffs, said Ma, pointing to the rise of companies like Xiaomi, considered “the Apple of China.”

Globally, shipments of virtual reality (VR) hardware are expected to skyrocket this year, according to IDC’s forecasts. It expects shipments to hit 9.6 million units worldwide, generating $2.3 billion mostly for the four lead manufacturers: Samsung, Sony, HTC, and Oculus.

With VR in its early days, both Ma and Collins see the most growth potential for the emerging medium coming from the consumer market. Gaming and even adult entertainment options promise to be the first use-cases for mass adoption, with applications in the hospitality, real estate, or travel sectors coming later.

“That will be bigger on the consumer side of the market,” Collins said. “That’s what we’ll see first here in Canada and in other parts of the world.”

Takeaway: Augmented reality (AR) headsets will take longer to ramp up, IDC expects. In 2016, less than half a million units will ship. That will quickly climb to 45.6 million units by 2020, chasing the almost 65 million expected shipments of VR headsets. But unlike VR, the first applications for AR will be in the business world.

“Technology is integrated with everything”

There are currently more than 3.8 billion mobile phones on the planet — just think of the opportunities, offered David Senf, vice president of infrastructure solutions for IDC Canada.

He argued that digital transformation is an even bigger consideration than security — and responding to business concerns is a top concern for IT in 2016. IT staff spent two weeks more “keeping the lights on” in 2015 versus being focused on new, innovative projects. This has to change, said Senf.

IT is living in an era of big data and advanced analytics. As cloud technology matures — from just being software-as-a-service (SaaS) to platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) — CIOs should think about the cloud in a new way. Instead of just the cloud, it’s a vital architecture that should be supporting the business.

“Organizations are starting to define what that architecture looks like,” said Senf, adding the successful ones understand that the cloud is a competitive driver, particularly from an identity management, cost, and data residency perspective.

Takeaway: If the infrastructure isn’t already ready for big data, it might already be behind the curve. Senf notes CIOs should ensure that the IT department is able to scale quickly for change — and is ready to support the growing demands of the business side, including mobility and public cloud access.

Get ready to experiment and become comfortable with data sources and analysis. This includes looking at the nature of probabilistic findings — and using PaaS, he added.

Read more: http://www.itworldcanada.com/article/the-future-of-it-four-points-on-why-digital-transformation-is-a-big-deal/383121
or visit http://www.itworldcanada.com for more Canadian IT News



UK-based hyper-convergence startup bets on ARM processors

2016-10-03 - By 

Cambridge, U.K.-based startup Kaleao Ltd.  is entering the hyper-converged systems market today with a platform based on the ARM chip architecture that it claims can achieve unparalleled scalability and performance at a fraction of the cost of competing systems.

The KMAX platform features an integrated OpenStack cloud environment and miniature hypervisors that dynamically define physical computing resources and assign them directly to virtual machines and applications. These “microvisors,” as Kaleao calls them, dynamically orchestrate global pools of software-defined and hardware-accelerated resources with much lower overhead than that of typical hypervisors. Users can still run the KVM hypervisor if they want.

The use of the ARM 64-bit processor distinguishes Kaleao from the pack of other hyper-converged vendors such as VMware Inc., Nutanix Inc. and SimpliVity Inc., which use Intel chips. ARM is a reduced instruction set computing-based architecture that is commonly used in mobile devices because of its low power consumption.

“We went with ARM because the ecosystem allows for more differentiation and it’s a more open platform,” said Giovanbattista Mattiussi, principal marketing manager at Kaleao. “It enabled us to rethink the architecture itself.”

One big limitation of ARM is that it’s unable to support the Windows operating system or VMware vSphere virtualization manager. Instead, Kaleao is bundling Ubuntu Linux and OpenStack, figuring those are the preferred choices for cloud service providers and enterprises that are building private clouds. Users can also install any other Linux distribution.

Kaleao said the low overhead of its microvisors, combined with the performance of ARM processors, enables it to deliver 10 times the performance of competing systems at less than one-third of the energy consumption. Users can run four to six times as many microvisors as hypervisors, Mattiussi said. “It’s like the VM is running on the hardware with no software layers in between,” he said. “We can pick up a piece of the CPU here, a piece of storage there. It’s like having a bare-bones server running under the hypervisor.”

The platform provides up to 1,536 CPU cores and 370 TB of all-flash storage, with 960 gigabytes per second of networking, in a 3U rack unit. Energy usage is less than 15 watts per eight-core server. “Scalability is easy,” Mattiussi said. “You just need to add pieces of hardware.”

KMAX will be available in January in server and appliance versions. The company hasn’t released pricing but said its cost structure enables prices in the range of $600 to $700 per server, or about $10,000 for a 16-server blade. It plans to sell direct and through distributors. The company has opened a U.S. office in Charlotte, NC and has European outposts in Italy, Greece and France.

Co-founders Giampietro Tecchiolli and John Goodacre have a long track record in hardware and chip design, and both are active in the Euroserver green computing project. Goodacre also continues to serve as director of technology and systems at ARM Holdings plc, which designs the ARM processor.

Kaleao has raised €3 million and said it’s finalizing a second round of €5 million.

Original article here


standard

8 digital skills we must teach our children

2016-09-11 - By 

The social and economic impact of technology is widespread and accelerating. The speed and volume of information have increased exponentially. Experts are predicting that 90% of the entire population will be connected to the internet within 10 years. With the internet of things, the digital and physical worlds will soon be merged. These changes herald exciting possibilities. But they also create uncertainty. And our kids are at the centre of this dynamic change.

Children are using digital technologies and media at increasingly younger ages and for longer periods of time. They spend an average of seven hours a day in front of screens – from televisions and computers, to mobile phones and various digital devices. This is more than the time children spend with their parents or in school. As such, it can have a significant impact on their health and well-being. What digital content they consume, who they meet online and how much time they spend onscreen – all these factors will greatly influence children’s overall development.

The digital world is a vast expanse of learning and entertainment. But it is in this digital world that kids are also exposed to many risks, such as cyberbullying, technology addiction, obscene and violent content, radicalization, scams and data theft. The problem lies in the fast and ever-evolving nature of the digital world, where proper internet governance and child-protection policies are slow to catch up, rendering them ineffective.

Moreover, there is the digital age gap. The way children use technology is very different from adults. This gap makes it difficult for parents and educators to fully understand the risks and threats that children could face online. As a result, adults may feel unable to advise children on the safe and responsible use of digital technologies. Likewise, this gap gives rise to different perspectives of what is considered acceptable behaviour.

So how can we, as parents, educators and leaders, prepare our children for the digital age? Without a doubt, it is critical for us to equip them with digital intelligence.

Digital intelligence or “DQ” is the set of social, emotional and cognitive abilities that enable individuals to face the challenges and adapt to the demands of digital life. These abilities can broadly be broken down into eight interconnected areas:

Digital identity: The ability to create and manage one’s online identity and reputation. This includes an awareness of one’s online persona and management of the short-term and long-term impact of one’s online presence.

Digital use: The ability to use digital devices and media, including the mastery of control in order to achieve a healthy balance between life online and offline.

Digital safety: The ability to manage risks online (e.g. cyberbullying, grooming, radicalization) as well as problematic content (e.g. violence and obscenity), and to avoid and limit these risks.

Digital security: The ability to detect cyber threats (e.g. hacking, scams, malware), to understand best practices and to use suitable security tools for data protection.

Digital emotional intelligence: The ability to be empathetic and build good relationships with others online.

Digital communication: The ability to communicate and collaborate with others using digital technologies and media.

Digital literacy: The ability to find, evaluate, utilize, share and create content as well as competency in computational thinking.

Digital rights: The ability to understand and uphold personal and legal rights, including the rights to privacy, intellectual property, freedom of speech and protection from hate speech.

Above all, the acquisition of these abilities should be rooted in desirable human values such as respect, empathy and prudence. These values facilitate the wise and responsible use of technology – an attribute that will mark the leaders of tomorrow. Indeed, cultivating digital intelligence grounded in human values is essential for our kids to become masters of technology instead of being mastered by it.

Original article here.


standard

Why is there so much bullshit? (infographic)

2016-09-06 - By 

Did you ever wonder why you spend so much of your day wading through bullshit? Every worker must consume masses of information, but most of it is poorly written, impenetrable, and frustrating to consume. How did we get here?

I’ve actually studied this question. In fact, Chapter 2 of my book explains it in detail. Basically:

  • Reading on screens all day impairs our attention. According to Chartbeat, a person reading a news article online gives it an average of only 36 seconds of attention. Forrester Research reports that the only people who read more media in print than online are those 70 years old and older. It’s a noisy, jam-packed world of text that we all navigate, and that makes it harder for us to pay attention to what we read.
  • No one edits what we read. Compared to decades ago, most of what we consume is unedited. It’s first draft emails and self-created blog posts and Facebook updates. Even what passes for news these days gets a lot less editorial attention than it used to, and it shows.
  • We learned to write the wrong way. Our writing teachers have failed to prepare us for today’s business world. The sterile, formulaic five-paragraph theme of high school English gives way to college professors who award the best grades to the longest, wordiest papers. Writing for on-screen readers needs to be brief and pointed, but nobody teaches that in school — or at work.

Want to spread the word? Share the infographic below which puts it all together.

Blogging note: my new site design goes live today. Same content, different package. If you have design comments, please send them to me here rather than as comments on this post.

Original article here.


standard

How Cloud Computing Is Changing the Software Stack

2016-09-01 - By 

Are sites, applications, and IT infrastructures leaving the LAMP stack (Linux, Apache, MySQL, PHP) behind? How have the cloud and service-oriented, modular architectures facilitated the shift to a modern software stack?

As more engineers and startups are asking the question “Is the LAMP stack dead?”—on which the jury is still out—let’s take a look at “site modernization,” the rise of cloud-based services, and the other ever-changing building blocks of back-end technology.

From the LAMP Era to the Cloud

Stackshare.io recently published its findings about the most popular components in many tech companies’ software stacks these days—stacks that are better described as “ecosystems” due to their integrated, interconnected array of modular components, software-as-a-service (SaaS) providers, and open-source tools, many of which are cloud-based.

It’s an interesting shift. Traditional software stacks used to be pretty cut and dried. Acronyms like LAMP, WAMP, and MEAN neatly described a mix of onsite databases, servers, and operating systems built with server-side scripts and frameworks. When these systems grow too complex, though, the productivity they enable can be quickly eclipsed by the effort it takes to maintain them. That point is debatable, since anything built well from the ground up should be sturdy and scalable, but the appeal of a more modular approach has still prompted many to make the shift.

A shift in the software stack status quo?

For the last five or so years, the monolithic, LAMP-style approach has increasingly been questioned as the best possible route. Companies are migrating data and servers to the cloud, opting for streamlined API-driven data exchange, and using SaaS and PaaS solutions as super-scalable ways to build applications. In addition, they’re turning to a diverse array of technologies that can be more easily customized and integrated with one another—mainly JavaScript libraries and frameworks—allowing companies to be more nimble, and less reliant on big stack architectures.

But modularity is not without its complexities, and it’s also not for everyone. SaaS, mobile, and cloud-computing companies are more likely to take a distributed approach, while financial, healthcare, big data, and e-commerce organizations are less likely to. With the right team, skills, and expectations, however, it can be a great fit.

New, scalable building blocks like Nginx, New Relic, Amazon EC2, and Redis are stealing the scene as tech teams work toward more modular, software-based ecosystems—and here are a few reasons why.

What are some of the key drivers of this shift?

1. Continuous deployment

What’s the benefit of continuous deployment? Shorter concept-to-market development cycles that allow businesses to give customers new features faster, or adjust to what’s happening with traffic.

It’s possible to continuously deploy with a monolith architecture, but certain organizations are finding this easier to do beyond a LAMP-style architecture. Having autonomous microservices allows companies to deploy in chunks continuously, without dependencies and the risk of one failure causing another related failure. Tools like GitHub, Amazon EC2, and Heroku allow teams to continuously deploy software, for example, in an Agile sprint-style workflow.
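To make the idea of autonomous, independently deployable services concrete, here is a minimal sketch in Python using the Flask framework (an assumption; the article prescribes no particular stack). Each service like this owns one small concern and can ship on its own release cadence:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Deployment tooling can poll this endpoint to confirm a new release is live.
    return jsonify(status="ok")

@app.route("/orders/<order_id>")
def get_order(order_id):
    # A real service would read from its own datastore; this stub marks the
    # boundary: one small concern, deployable independently of everything else.
    return jsonify(order_id=order_id, status="processing")

if __name__ == "__main__":
    app.run(port=8080)

Because the service exposes only an HTTP contract, it can be redeployed at any time without coordinating a release of the rest of the system.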

2. The cloud is creating a new foundation

Cloud providers have completely shaken up the LAMP paradigm. Providers like Amazon Web Services (AWS) are creating entirely new foundations with cloud-based modules that don’t require constant attention, upgrades, and fixes. Whereas stacks used to comprise a language (Perl, Python, or PHP), a database (MySQL), a server, operating system, application servers, and middleware, now there are cloud modules, APIs, and microservices taking their place.
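As a rough illustration of that replacement, assuming an AWS account with credentials already configured and the boto3 SDK installed (the article names no specific tooling), the storage and server tiers of a classic stack reduce to a few API calls:

import boto3

# Object storage that once meant provisioning a disk array:
s3 = boto3.client("s3")
s3.create_bucket(Bucket="example-app-assets")  # hypothetical bucket name

# A server that once meant racking and cabling hardware:
ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)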

3. Integration is simplified

Tools need to work together, and thanks to APIs and modular services, they can—without a lot of hassle. Customer service platforms need to integrate with email and databases, automatically. Many of the new generation of software solutions not only work well together, they build on one another and can become incredibly powerful when paired up—Salesforce’s integrated SaaS offerings, for example.

4. Elasticity and affordable scalability

Cloud-based servers, databases, email, and data processing allow companies to rapidly scale up—something you can learn more about in this Intro to Cloud Bursting article. Rather than provision more hardware, plus the time (and space) it takes to set that hardware up, companies can purchase more space in the cloud on demand. This makes it easier to ramp up data processing. AWS really excels here and is a top choice: companies like Upwork, Netflix, Adobe and Comcast have built their stacks with its cloud-based tools.

For areas like customer service, testing, analytics, and big data processing, modular components and services also rise to the occasion when demand spikes.

5. Flexibility and customization

The beauty of many of these platforms is that they come ready to use out of the box—but with lots of room to tweak things to suit your needs. Because the parts are autonomous, you also have the flexibility to mix and match your choice of technologies—whether those are different programming languages or frameworks and databases that are particularly well-suited to certain apps or projects.

Another thing many organizations love is the ability to swap out one component for another without a lot of back-end reengineering. It is possible to replace parts in a monolith architecture, but for companies that need to get systems up and running fast—and anticipate a spike in growth or a lack of resources—modular components make it easy to swap out one for another. Rather than trying to adapt legacy technology for new purposes, companies are beginning to build, deploy, and run applications in the cloud.

6. Real-time communication and collaboration

Everyone wants to stay connected and communicate—especially companies with distributed engineering teams. Apps that let companies communicate internally and share updates, information, and more are some of the most important parts of modern software stacks. Here’s where a chat app like HipChat comes in, and other software like Atlassian’s JIRA, Confluence, Google Apps, Trello, and Basecamp. Having tools like these helps keep everyone on the same page, no matter what time zone they’re in.

7. Divvying up work between larger teams and distributed teams

When moving architectures to distributed systems, it’s important to remember that the more complicated a system is, the more a team will have to keep up with the new set of challenges that come along with cloud-based systems: failures, eventual consistency, and monitoring. Moving away from the LAMP-style stack is as much a technical change as it is a cultural one; be sure you’re engaging MEAN stack engineers and DevOps professionals who are skilled with this new breed of stack.

So what are the main platforms shaking up the stack landscape?

The Stackshare study dubbed this new generation of tech companies leaving LAMP behind “GECS companies”—named for their predominant use of GitHub, Amazon EC2, and Slack, although there are many same-but-different tools like these three platforms.

Upwork has moved its stack to AWS, a shift that the Upwork engineering team is documenting on the Upwork blog. These new platforms offer startups and other businesses more democratization of options—with platforms, cloud-based servers, programming languages, and frameworks that can be combined to suit their specific needs.

  • JavaScript: JavaScript is the biggest piece of the new, post-LAMP pie. Think of it as the replacement for the “P” (PHP) in LAMP. It’s a front-end scripting language, but it’s so much more—it’s a stack-changer. JavaScript is powerful for both the front-end and back-end, thanks to Node.js, and is even outpacing some mobile technologies. Where stacks were once more varied between client and server, JavaScript is creating a more fluid, homogeneous stack, with a multitude of frameworks like Backbone, Express, Koa, Meteor, React, and Angular.
  • Ruby and Python also dominate the new back-end stack, along with Node.js.
  • Amazon Web Services (AWS): The AWS cloud-based suite of products is the new foundation for many organizations, offering everything from databases and developer tools to analytics, mobile and IoT support, and networking.
  • Computing platforms: Amazon EC2, Heroku, and Microsoft Azure
  • Databases: PostgreSQL, with some MongoDB and MySQL.

The good news? There’s no shortage of Amazon Web Services pros, freelance DevOps engineers, and freelance data scientists who are skilled in all of these newer platforms and technologies and poised to help companies get new, cloud-based stacks up and running.

Read more at http://www.business2community.com/brandviews/upwork/cloud-computing-changing-software-stack-01644544#kEgMIdXIW7Q0ZpOt.99


standard

Happy Birthday Linux! Celebrating 25 years of the open source Linux operating system

2016-08-28 - By 

This last week marks the 25th birthday of Linux, the free operating system which now sits at the core of our modern world.

On this day, 25 years ago, Linus Torvalds started what would become one of the most prominent examples of free and open-source collaboration – and it all started with a simple message on the comp.os.minix newsgroup.

“I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things),” Torvalds wrote.

Since then Linux has grown into the open source beast that it is today, with everything from companies and supercomputers to mobile phones relying on the OS.

In celebration of the 25-year milestone, CBR talked to a company known for pioneering the original open source model – Red Hat. Sitting down with Ellie Burns, Martin Percival, senior solutions architect at Red Hat, discussed the past, current and future innovations being driven by Linux and open source technology.


EB: It is the 25th birthday of Linux – or is it? There is some confusion over whether it is Aug 25th or Oct 5th, when the first public release was made. So are you celebrating today?

MP: August 25th 1991 was the day that Linux was first announced to the public, which is why Red Hat is celebrating it today. Of course people do argue over this date, as the first kernel (0.01) was released in September 1991 and Linus Torvalds started working on Linux way before August 25th – however, I prefer to stick to the day the world found out it existed.


EB: What do you think has been the biggest mile-stone or breakthrough in Linux’s 25 year history?

MP: The creation of Linux itself was almost the biggest breakthrough. The fact that we could work together on an operating system that was available for the whole world to use was a radical idea. Linux offered a solid platform that people could not only rely on, but could also modify for their own needs. Since then Linux has enabled many technology innovations. Over the years, Linux has continued to evolve, keeping up with hardware developments and new thinking in the way that we use software. This has kept it at the forefront of usable, useful operating systems.

Since its launch, a huge milestone for Linux was the rise of professional open source and the resulting acceptance and implementation of Linux in the enterprise. Organisations such as Red Hat made this possible by adapting Linux to make it work smoothly and securely for the enterprise and its needs. The implementation of virtualisation and container support in Linux and its use in most major cloud solutions are significant innovations that have radically changed how businesses can approach their computing problems.


EB: How has Linux changed the world since its inception in 1991?

MP: Linux has become a powerful driver in the ways in which we conduct our daily lives. Almost every item around us that uses technology – from phones to supercomputers to TVs and much more – is running on Linux. Often we do not even notice it. For example, when you walk in central London, black cabs with advertising screens are at every corner. These screens are running on Linux, but I am sure most people wouldn’t know that. Linux is everywhere, even in the most unexpected and mundane objects. There are all sorts of unusual uses for Linux: smart fridges (with screens showing information such as temperature or the current time), games consoles, IoT devices and systems, and traffic control systems are all examples of common objects enabled by Linux.

Linux has enabled technology to be woven into the world around us as more devices and objects increasingly use complex technology. It has helped change the world into an environment where everything and everyone is connected together through technology and which is highly influenced by social media. Linux has been at the centre of all these changes in people’s lives for the last 25 years.

It has not only changed society, but also the enterprise and IT. Linux long ago reached the tipping point of being accepted in the enterprise datacenter. The work carried out by Red Hat and its business model has largely made this possible and accelerated this process, by giving organisations the kind of quality and support assurance for Linux that they would demand for any other product.


EB: What do you think is the biggest innovation that Linux made possible?

MP: It has to be “the cloud”. Before Linux, enterprises relied on hardware from Unix vendors such as SGI or HP for their computing power. This bundling of hardware and operating system was very expensive, as every manufacturer locked the two together in its own unique and very profitable way. In these circumstances, it’s hard to imagine how the current cloud computing landscape could have ever started. The sheer cost of filling a cloud datacentre would have been prohibitive.

The beauty of Linux was that it grew initially on commodity hardware, bringing ever increasing processor power to the user without the lock-in of the proprietary model. This, in turn, has given organisations the ability to install more machines at a much lower cost, making it easier to imagine, and finally realise, the world of cloud computing today; where countless servers, all running Linux, provide the computing backbone for our lives.

Now we see Linux being used not only as an operating system in its own right, but also as the foundation for private cloud offerings like OpenStack and container platforms like Red Hat’s OpenShift Container Platform.


EB: What does the future hold for Linux?

MP: The future for Linux is probably tied up with our lifestyle. Today our lives are closely linked and influenced by social media and everything is accessible publicly. This is thanks to the Internet and the information we can easily share – and this sharing is made possible, in many cases, by Linux operating systems.

The open source community picks up new ways of thinking very quickly and is efficient at driving software change to match that thinking. Invariably, as contributors write this code, Linux also evolves as an operating system, handling new hardware and ways of working that never existed 25 years ago.

In many ways, Linux is the ultimate adaptable creature, always matching functionality to requirements. It has enabled drastic IT innovations already and many more hardware advances will be happening, especially with the rise of IoT and cloud computing. More IT challenges will no doubt appear within the business world and the community will constantly have to adapt to these changes and challenges in order to make IT simpler, safer and better adapted to people’s and business’ needs.


EB: What do you think Linux will look like in 25 years’ time, at the grand old age of 50?

MP: In the past 25 years, Linux has helped us accomplish many innovations that no one would have even thought of back in 1991, let alone thought possible, and in the next 25 years, we will see even greater changes as IT evolves. In the future, Linux will allow us to be even more connected, perhaps in ways different to how we interact today. I am convinced Linux will be alive and well and be more embedded in our lives than ever before; perhaps parts of Linux will be running in our own bodies, and there will be all sorts of artificial intelligence in play around us!

Linux remains well-poised to continue to lead IT innovation for the enterprise, but to remain leader for the next 25 years, Linux will need to focus on:

Security

Linux will serve as the front line in enterprise security. We will diagnose the earliest lines of code to detect potential flaws and vulnerabilities in order to prevent any damage.

Hardware advances

Linux is the standardiser for the ecosystem of chipsets and hardware approaches, and the community will have to continue to adapt as more innovations arise.

Linux containers

This revolution will shake up Linux and open source in the next 25 years. Linux will drive towards scalability at an almost infinitely fine-grained level.

The “next” next

Linux will be renowned as the platform for innovation to identify “what’s next”. Linux will constantly adapt components in the kernel and add new ones to stay ahead of IT innovations and remain the point of reference for the future of IT, whatever it may be.

Original article here.


standard

Is Most Published Research Wrong? (video)

2016-08-26 - By 

It sounds crazy but there are logical reasons why the majority of published research literature is false. This video digs into the reasons for this and what’s being done about it.

Original YouTube link is here.


standard

The 100 Best Free Google Chrome Extensions

2016-08-25 - By 

It’s been an up and down couple of years for Google’s Chrome Web browser.

When we first did a version of this story in January 2015, Chrome had about 22.65 percent of the browser market worldwide, according to Net Applications. As of July 2016, Chrome has 50.95 percent—it crossed paths with Microsoft Internet Explorer in March, when both hit 39 percent. IE continues to dwindle, as do Firefox and Safari. Only Chrome and Microsoft’s new Edge browser have gained, but Edge is only at 5.09 percent of the market.

Then it lost some kudos—from us. After Chrome spent several years as PCMag’s favorite browser, a resurgent Firefox took our Editors’ Choice award. The reason: Chrome lags in graphics hardware acceleration, and it isn’t exactly known for respecting user privacy (just like its parent company).

That said, Chrome remains a four-star tour de force for Web surfing, with full HTML5 support and speedy JavaScript performance. Obviously, there is no denying its popularity. And, like Firefox before it, it’s got support for extensions that make it even better. Its library of extras, found at the Chrome Web Store, is larger than what rival Firefox has offered for years. The store also has add-ons providing quick access to just about every Web app imaginable.

Rather than having you stumble blindly through the store to find the best add-ons, we’ve compiled a list of 100 you should consider. Several are unique to Google and its services (such as Gmail), which isn’t surprising considering who made Chrome. Most extensions work across operating systems, so you can try them on any desktop platform; there may be some versions that work on the mobile Chrome, too.

All of these extensions are free, so there’s no harm in giving them all a try. You can easily disable or remove them by typing chrome://extensions/ into the Chrome address bar, or right-click an extension’s icon in the toolbar to remove it. As of Chrome version 49, every extension must have a toolbar icon; you can hide an icon without uninstalling the extension by right-clicking it and selecting “Hide in Chrome Menu,” but you can’t get rid of the icons for good.

Read on for our favorites here, and let us know if we missed a great one!


standard

Google has made a big shift in its plan to give everybody faster internet: from wired to wireless

2016-08-16 - By 

People loved the idea of Google Fiber when it was first announced in 2010. Superfast internet that’s 100 times faster than the norm — and it’s cheap? It sounded too good to be true.

But maybe that initial plan was a little too ambitious.

Over the last several years, Google has worked with dozens of cities and communities to build fiber optic infrastructure that can deliver gigabit speeds to homes and neighborhoods — this would let you stream videos instantly or download entire movies in seconds.

But right now, introducing Google Fiber in any town is a lengthy, expensive process. Google first needs to work with city leaders to lay the groundwork for construction, and then it needs to lay cables underground, along telephone lines, and in houses and buildings.

This all takes time and money: Google has spent hundreds of millions of dollars on these projects, according to The Wall Street Journal, and the service is available in just six metro areas, an average of one per year.

Given these barriers, Google Fiber is reportedly working on a way to make installation quicker, cheaper, and more feasible. According to a new filing with the Federal Communications Commission earlier this month, Google has been testing a new wireless-transmission technology that “relies on newly available spectrum” to roll out Fiber much more quickly.

“The project is in early stages today, but we hope this technology can one day help deliver more abundant internet access to consumers,” a Google spokesperson told Business Insider.

And, according to The Journal, Google is looking to use this wireless technology in “about a dozen new metro areas, including Los Angeles, Chicago and Dallas.”

Right now, Google Fiber customers can pay $70 a month for 1-gigabit-per-second speeds and an extra $60 a month for the company’s TV service. It’s unclear if this wireless technology would change the pricing, but at the very least, it ought to help accelerate Fiber’s expansion and cut down on installation costs.

One of the company’s recent acquisitions could help this transition. In June, Google Fiber bought Webpass, a company that knows how to wirelessly transmit internet service from fiber-connected antennas to antennas mounted on buildings. It’s a concept that’s pretty similar to Starry, another ambitious company that wowed us earlier this year with its plan for a superfast, inexpensive internet service.

Original article here.


standard

A New Era for Application Building, Management and User Access

2016-08-15 - By 

Every decade, a set of major forces work together to change the way we think about “applications.” Until now, those changes were principally evolutions of software programming, networked communications and user interactions.

In the mid-1990s, Bill Gates’ famous “The Internet Tidal Wave” letter highlighted the rise of the internet, browser-based applications and portable computing.

By 2006, smart, touch devices, Software-as-a-Service (SaaS) and the earliest days of cloud computing were emerging. Today, data and machine learning/artificial intelligence are combining with software and cloud infrastructure to become a new platform.

Microsoft CEO Satya Nadella recently described this new platform as “a third ‘run time’ — the next platform…one that doesn’t just manage information but also learns from information and interacts with the physical world.”

I think of this as an evolution from software to dataware as applications transform from predictable programs to data-trained systems that continuously learn and make predictions that become more effective over time. Three forces — application intelligence, microservices/serverless architectures and natural user interfaces — will dominate how we interact with and benefit from intelligent applications over the next decade.

In the mid-1990s, the rise of internet applications offered countless new services to consumers, including search, news and e-commerce. Businesses and individuals had a new way to broadcast or market themselves to others via websites. Application servers from BEA, IBM, Sun and others provided the foundation for internet-based applications, and browsers connected users with apps and content. As consumer hardware shifted from desktop PCs to portable laptops, and infrastructure became increasingly networked, the fundamental architectures of applications were re-thought.

By 2006, a new wave of core forces shaped the definition of applications. Software was moving from client-server to Software-as-a-Service. Companies like Salesforce.com and NetSuite led the way, with others like Concur transforming into SaaS leaders. In addition, hardware started to become software services in the form of Infrastructure-as-a-Service with the launch of Amazon Web Services S3 (Simple Storage Service) and then EC2 (Elastic Compute Cloud).

Smart, mobile devices began to emerge, and applications for these devices quickly followed. Apple entered the market with the iPhone in 2007, and a year later introduced the App Store. In addition, Google launched the Android ecosystem that year. Applications were purpose-built to run on these smart devices, and legacy applications were re-purposed to work in a mobile context.

As devices, including iPads, Kindles, Surfaces and others proliferated, application user interfaces became increasingly complex. Soon developers were creating applications that responsively adjusted to the type of device and use case they were supporting. Another major change of this past decade was the transition from typing and clicking, which had dominated the PC and Blackberry era, to touch as a dominant interface for humans and applications.

In 2016, we are on the cusp of a totally new era in how applications are built, managed and accessed by users. The most important aspect of this evolution is how applications are being redefined from “software programs” to “dataware learners.”

For decades, software has been programmed and designed to run in predictable ways. Over the next decade, dataware will be created through training a computer system with data that enables the system to continuously learn and make predictions based on new data/metadata, engineered features and algorithm-powered data models.

In short, software is programmed and predictable, while the new dataware is trained and predictive. We benefit from dataware all the time today in modern search, consumer services like Netflix and Spotify and fraud protection for our credit cards. But soon, every application will be an intelligent application.

Three major, interrelated forces underlie the shift from software to dataware, and together they necessitate a new “platform” for application development and operations.

Application intelligence

Intelligent applications are the end product of this evolution. They leverage data, algorithms and ongoing learning to anticipate and improve interactions with the people and machines they serve.


They combine three layers: innovative data and metadata stores, data intelligence systems (enabled by machine learning/AI) and the predictive intelligence that is expressed at an “application” layer. In addition, these layers are connected by a continual feedback loop that collects data at the points of interaction between machines and/or humans to continually improve the quality of the intelligent applications.

Microservices and serverless functions

Monolithic applications, even SaaS applications, are being deconstructed into components that are elastic building blocks for “macro-services.” Microservice building blocks can be simple or multi-dimensional, and they are expressed through Application Programming Interfaces (APIs). These APIs often communicate machine-to-machine, such as Twilio for communication or Microsoft’s Active Directory Service for identity. They also enable traditional applications to more easily “talk” or interact with new applications.

And, in the form of “bots,” they perform specific functions, like calling a car service or ordering a pizza via an underlying communication platform. A closely related and profound infrastructure trend is the emergence of event-driven, “serverless” application architectures. Serverless functions such as Amazon’s Lambda service or Google Cloud Functions leverage cloud infrastructure and containerized systems such as Docker.

At one level, these “serverless functions” are a form of microservice. But they are separate, as they rely on data-driven events to trigger a stateless function to perform a specific task. These functions can even call intelligent applications or bots as part of a functional flow. These tasks can be connected and scaled to form real-time, intelligent applications and be delivered in a personalized way to end-users. Microservices, in their varying forms, will dominate how applications are built and “served” over the next decade.
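Here is a minimal sketch of such a function, written in Python against an AWS Lambda-style runtime (one of the services named above; the handler signature follows Lambda's convention). The platform invokes the handler only when a data event fires, and nothing runs in between:

import json

def handler(event, context):
    # 'event' carries whatever data triggered this invocation; for an
    # S3-style trigger it lists the bucket and object key involved.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]
    # Perform one specific, stateless task, return, and stop consuming resources.
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}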

Natural user interface

If touch was the last major evolution in interfaces, voice, vision and virtual interaction using a mix of our natural senses will be the major interfaces of the next decade. Voice is finally exploding with platforms like Alexa, Cortana and Siri. Amazon Alexa already has more than 1,000 voice-activated skills on its platform. And, as virtual and augmented reality continue to progress, voice and visual interfaces (looking at an object to direct an action) will dominate how people interact with applications.

Microsoft HoloLens and Samsung Gear are early examples of devices using visual interfaces. Even touch will evolve in both the physical sense through “chatbots” and the virtual sense, as we use hand controllers like those that come with a Valve/HTC Vive to interact with both our physical and virtual worlds. And especially in virtual environments, using a voice-activated service like Alexa to open and edit a document will feel natural.

What are the high-level implications of the evolution to intelligent applications powered by a dataware platform?

SaaS is not enough. The past 10 years in commercial software have been dominated by a shift to cloud-based, always-on SaaS applications. But, these applications are built in a monolithic (not microservices) manner and are generally programmed, versus trained. New commercial applications will emerge that will incorporate the intelligent applications framework, and usually be built on a microservices platform. Even those now “legacy” SaaS applications will try to modernize by building in data intelligence and microservices components.

Data access and usage rights are required. Intelligent applications are powered by data, metadata and intelligent data models (“learners”). Without access to the data and the right to use it to train models, dataware will not be possible. The best sources of data will be proprietary and differentiated. Companies that curate such data sources and build frequently used, intelligent applications will create a virtuous cycle and a sustainable competitive advantage. There will also be a lot of work and opportunity ahead in creating systems to ingest, clean, normalize and create intelligent data learners leveraging machine learning techniques.

New form factors will emerge. Natural user interfaces leveraging speech and vision are just beginning to influence new form factors like Amazon Echo, Microsoft HoloLens and Valve/HTC Vive. These multi-sense and machine-learning-powered form factors will continue to evolve over the next several years. Interestingly, the three mentioned above emerged from a mix of Seattle-based companies with roots in software, e-commerce and gaming!

The three major trends outlined here will help turn software applications into dataware learners over the next decade, and will shape the future of how man and machine interact. Intelligent applications will be data-driven, highly componentized, accessed via almost all of our senses and delivered in real time.

These applications and the devices used to interact with them, which may seem improbable to some today, will feel natural and inevitable to all by 2026 — if not sooner. Entrepreneurs and companies looking to build valuable services and software today need to keep these rapidly emerging trends in mind.

I remember debating with our portfolio companies in 2006 and 2007 whether or not to build products as SaaS and mobile-first on a cloud infrastructure. That ship has sailed. Today we encourage them to build applications powered by machine learning, microservices and voice/visual inputs.

Original article here.


standard

Is Anything Ever ‘Forgotten’ Online?

2016-08-13 - By 

When someone types your name into Google, suppose the first link points to a newspaper article about you going bankrupt 15 years ago, or to a YouTube video of you smoking cigarettes 20 years ago, or simply a webpage that includes personal information such as your current home address, your birth date, or your Social Security number. What can you do — besides cry?

Unlike those living in the United States, Europeans actually have some recourse. The European Union’s “right to be forgotten” (RTBF) law allows EU residents to fill out an online form requesting that a search engine (such as Google) remove links that compromise their privacy or unjustly damage their reputation. A committee at the search company, primarily consisting of lawyers, will review your request, and then, if deemed appropriate, the site will no longer display those unwanted links when people search for your name.

But privacy efforts can backfire. A landmark example of this happened in 2003, when actress and singer Barbra Streisand sued a California couple who took aerial photographs of the entire length of the state’s coastline, which included Streisand’s Malibu estate. Streisand’s suit argued that her privacy had been violated, and tried to get the photos removed from the couple’s website so nobody could see them. But the lawsuit itself drew worldwide media attention; far more people saw the images of her home than would have through the couple’s online archive.

In today’s digital world, privacy is a regular topic of concern and controversy. If someone discovered the list of all the things people had asked to be “forgotten,” they could shine a spotlight on that sensitive information. Our research explored whether that was possible, and how it might happen; it showed that hidden news articles can be unmasked with some hacking savvy and a moderate amount of financial resources.

Keeping the past in the past

The RTBF law does not require websites to take down the actual web pages containing the unwanted information. Rather, just the search engine links to those pages are removed, and only from the results of searches for specific terms.

In most circumstances, this is perfectly fine. If you shoplifted 20 years ago, and people you have met recently do not suspect you shoplifted, it is very unlikely they would discover — without the aid of a search engine — that you ever shoplifted by simply browsing online content. By removing the link from Google’s results for searches of your name, your brief foray into shoplifting would be, for all intents and purposes, “forgotten.”

This seems like a practical solution to a real problem that many people are facing today. Google has received requests to remove more than 1.5 million links from specific search results and has removed 43 percent of them.

‘Hiding’ in plain sight

But our recent research has shown that a transparency activist or private investigator, with modest hacking skills and financial resources, can find newspaper articles that have been removed from search results and identify the people who requested those removals. This data-driven attack has three steps.

First, the searcher targets a particular online newspaper, such as the Spanish newspaper El Mundo, and uses automated software tools to download articles that may be subject to delisting (such as articles about financial or sexual misconduct). Second, he uses automated tools to extract the names mentioned in the downloaded articles. Third, he runs a program to query google.es with each of those names, to see whether the corresponding article is in the google.es search results. If not, then it is almost certainly an RTBF delisted link, and the corresponding name is the person who requested the delisting.

As a proof of concept, we did exactly this for a subset of articles from El Mundo, a Madrid-based daily newspaper we chose in part because one of our team speaks Spanish. From the subset of downloaded articles, we discovered two that are being delisted by google.es, along with the names of the corresponding requesters.

Using a third-party botnet to send the queries to Google from many different locations, and with moderate financial resources ($5,000 to $10,000), we believe the effort could cover all candidate articles in all major European newspapers. We estimate that 30 to 40 percent of the RTBF delisted links in the media, along with their corresponding requesters, could be discovered in this manner.

Lifting the veil

Armed with this information, the person could publish the requesters’ names and the corresponding links on a new website, naming those who have things they want forgotten and what it is they hope people won’t remember. Anyone seeking to find information on a new friend or business associate could visit this site — in addition to Google — and find out what, if anything, that person is trying to bury in the past. One such site already exists.

At present, European law only requires the links to be removed from country- or language-specific sites, such as google.fr and google.es. Visitors to google.com can still see everything. This is the source of a major European debate about whether the right to be forgotten should also require Google to remove links from searches on google.com. But because our approach does not involve using google.com, it would still work even if the laws were extended to cover google.com.

Should the right to be forgotten exist?

Even if delisted links to news stories can be discovered, and the identities of their requesters revealed, the RTBF law still serves a useful and important purpose for protecting personal privacy.

By some estimates, 95 percent of RTBF requests are not seeking to delist information that was in the news. Rather, people want to protect personal details such as their home address or sexual orientation, and even photos and videos that might compromise their privacy. These personal details typically appear in social media like Facebook or YouTube, or in profiling sites, such as profileengine.com. But finding these delisted links for social media is much more difficult because of the huge number of potentially relevant web pages to be investigated.

People should have the right to retain their privacy — particularly when it comes to things like home addresses or sexual orientation. But you may just have to accept that the world might not actually forget about the time when, as a teenager, your friend challenged you to shoplift.

Original article here.


standard

$1 Trillion in IT Spending Moving to Cloud. How Much is on Waste? [Infographic]

2016-08-11 - By 

Gartner recently reported that by 2020, the “cloud shift” will affect more than $1 trillion in IT spending.

The shift comes from the confluence of IT spending on enterprise software, data center systems, and IT services all moving to the cloud.

With this enormous shift and change of practices comes a financial risk that is very real: your organization may be spending money on services you are not actually using. In other words, wasting money.

How big is the waste problem, exactly?

The 2016 Cloud Market

While Gartner’s $1 trillion number refers to the next 5 years, let’s take a step back and look just at the size of the market in 2016, where we can more easily predict spending habits.

The size of the 2016 cloud market, from that same Gartner study, is about $734 billion. Of that, $203.9 billion is spent on public cloud.

Public cloud spend is spread across a variety of application services, management and security services, and more (BPaaS, SaaS, PaaS, etc.) – all of which have their own sources of waste. In this post, let’s focus on the portion for which wasted spend is easiest to quantify: cloud infrastructure services (IaaS).

Breaking down IaaS Spending

Within the $22.4 billion spent on IaaS, about two-thirds of spending is on compute resources (rather than database or storage). From a recent survey we conducted – bolstered by our daily conversations with cloud users – we learned that about half of these compute resources are used for non-production purposes: that is, development, staging, testing, QA, and other behind-the-scenes work. The majority of servers used for these functions do not need to run 24 hours a day, 7 days a week. In fact, they’re generally only needed for a 40-hour workweek at most (and even this assumes maximum efficiency, with developers accessing these servers during their entire workdays).

Since most compute infrastructure is sold by the hour, that means that for the other 128 hours of the week, you’re paying for time you’re not using. Ouch.
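The back-of-the-envelope arithmetic is easy to check:

HOURS_PER_WEEK = 24 * 7            # 168 hours billed if left running
WORKWEEK = 40                      # hours the server is actually needed
idle = HOURS_PER_WEEK - WORKWEEK   # 128 hours paid for but unused
print(f"idle share: {idle / HOURS_PER_WEEK:.0%}")   # prints "idle share: 76%"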

All You Need to Do is Turn Out the Lights

A huge portion of IT spending could be eliminated simply by “turning out the lights” – that is, by stopping hourly servers when they are not needed, so you only pay for the hours you’re actually using. Luckily, this does not have to be a manual process. You can automatically schedule off times for your servers, to ensure they’re always off when you don’t need them (and to save you time!).
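Here is a minimal sketch of such a schedule, assuming AWS, the boto3 SDK, and a hypothetical env=dev tag marking non-production instances (none of which the article prescribes); run it hourly from any scheduler you already have:

from datetime import datetime
import boto3

def stop_idle_dev_servers():
    now = datetime.now()
    if now.weekday() < 5 and 9 <= now.hour < 17:  # Mon-Fri, 9-to-5: leave servers on
        return
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:env", "Values": ["dev"]},                  # hypothetical tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    ids = [i["InstanceId"]
           for r in resp["Reservations"] for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)  # hourly billing stops while stopped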

Original article here.


standard

Develop Android Apps Using MIT App Inventor

2016-08-08 - By 

There is a secret inventor inside each of us. Get your creative juices flowing and go ahead and develop an Android app or two. It is easier than you think. Follow the detailed instructions given in this article, and you will have an Android app up and running in next to no time.

Imagine that you have come up with an idea for an app to address your own requirements but, due to a lack of knowledge and information, don’t know where to begin. You could contact an Android developer, who would charge you for his services, and you would also risk having your idea copied or stolen. You may also feel that you can’t develop the app yourself because you do not have the required programming and coding skills. But that’s not true. Let me tell you that you can develop Android apps on your own without any programming or coding. Here’s how.

An introduction to App Inventor
App Inventor is a tool that will convert your idea into a working application without the need for any prior coding or programming skills. App Inventor is an open source utility developed by Google in 2010 and currently maintained by the Massachusetts Institute of Technology (MIT). It allows absolute beginners in computer science to develop Android applications. It provides you with a graphical user interface with all the necessary components required to build Android apps. You just need to drag and drop the components in the block editor. Each block is an independent action, which you arrange in a logical sequence to produce the desired behaviour.

App Inventor features
App Inventor is a feature-rich open source Android app development tool. Here are its features.

1. Open source: Being open source, the tool is free for everyone and you don’t need to purchase anything. Open source software also gives you the freedom to customise it as per your own requirements.

2. Browser based: App Inventor runs in your browser; hence, you don’t need to install anything onto your computer. You just need to log in to your account using your email and password credentials.

3. Stored on the cloud: All your app related projects are stored on Google Cloud; therefore, you need not keep anything on your laptop or computer. Being browser based, it allows you to log in to your account from any device and all your work is in sync with the cloud database.

4. Real-time testing: App Inventor provides a standalone emulator that enables you to view the behaviour of your apps on a virtual machine. Complete your project and see it running on the emulator in real-time.

5. No coding required: As mentioned earlier, it is a tool with a graphical user interface, with all the built-in component blocks and logical blocks. You need to assemble multiple blocks together to result in some behavioural action.

6. Huge developer community: You will meet like-minded developers from across the world. You can post your queries regarding certain topics and they will be answered quickly. The community is very supportive and knowledgeable.

System requirements
Given below are the system requirements to run App Inventor from your computer or laptop:
1. A valid Google account, as you need to log in using your credentials.
2. A working Internet connection, as App Inventor is cloud based and you need to log in through your browser; hence, a working Internet connection is a must.
3. App Inventor is compatible with Google Chrome 29+, Safari 5+ and Firefox 23+.
4. An Android phone to test the final, developed application.

Beginning with App Inventor
Hope you have everything to begin your journey of Android app development with App Inventor. Follow the steps below to make your first project.
1.   Open your Google Chrome/Safari/Firefox browser and open the Google home page.
2.   Using the search box, search for App Inventor.
3.   Choose the very first link. It will redirect you to the App Inventor project’s main page. This page contains all the resources and tutorials related to App Inventor. We will explore it later. For now, click on the Create button on the top right corner.
4.   The next page will ask for your Google account credentials. Enter your user name and password that you use for your Gmail application.
5.   Click on the Sign in button, and you will successfully reach the App Inventor app development page. If the page asks you to confirm your approval of certain usage or policy terms, agree with them all. It is all safe and is mandatory if you want to move ahead.
6.   If all is done correctly, you should see the App Inventor projects page (shown as Figure 5 in the original article).
7.   Congratulations! You have successfully set up all the necessary things and can now develop your first application.

For step-by-step instructions to develop your first App Inventor application, and to see the full original article, go here.



standard

New Cloud Hosting & Server Software Marketplace – AppFerret

2016-07-28 - By 

AppFerret is currently in beta and will be launching soon.

The idea behind AppFerret is simple: to provide a system that connects Software Developers that build server-class software with Clients that need help solving real-world problems; sometimes really Big problems.

This is an exciting time, a golden age of amazing developments in software and computing resources like cloud computing. But how do you sort through all of the options, or even find out what options exist?

AppFerret provides a Marketplace to Sell, Support, Cloud-Execute, Train and Review server apps – especially Big Data and Analytics systems.

Our plans are pretty ambitious, but we believe that there is a genuine need for our service and we are open to ideas and suggestions from our community of developers and users.

