Posted On: August 2016 - AppFerret

standard

Data Visualization 101: How to Choose the Right Chart or Graph for Your Data

2016-08-31 - By 

You and I sift through a lot of data for our jobs. Data about website performance, sales performance, product adoption, customer service, marketing campaign results … the list goes on. 

When you manage multiple content assets, such as social media or a blog, with multiple sources of data, it can get overwhelming. What should you be tracking? What actually matters? How do you visualize and analyze the data so you can extract insights and actionable information? 

More importantly, how can you make reporting more efficient when you’re busy working on multiple projects at once?

Download our free guide here for complete data visualization guidelines and tips.

One of the struggles that slows down my own reporting and analysis is understanding what type of chart to use — and why. That’s because choosing the wrong type of chart, or simply defaulting to the most common type of visualization, could confuse the viewer or lead to misinterpretation of the data. 

Go here for examples of charts.


standard

The Growth of Cloud Computing Markets

2016-08-31 - By 

The cloud computing markets are growing at a rapid pace. People prefer to incorporate cloud technology into their day-to-day work because of its convenience. According to IDC, public cloud spending is expected to grow from $96.5 billion in 2016 to more than $195 billion in 2020.

Cloud computing is growing at a robust rate and has a promising future. If you are planning to invest in a technology-based company, cloud computing offers you a variety of advantages. 
 
What Exactly is Cloud Computing? 
 
The concept of cloud computing is somewhat abstract, but the heart of it is not difficult to understand. Cloud service providers store applications and data remotely on their servers and let people access those files over the internet at any time. In other words, you can simply upload videos, documents and photos to the cloud and retrieve them at your convenience. 
 
Cloud computing consists of three main services. They include: 
 
• Software as a Service (SaaS) – Software as a Service involves licensing applications to customers and delivering them online. Licenses are typically offered on demand or on a pay-as-you-go basis. 

• Infrastructure as a Service (IaaS) – Infrastructure as a Service offers everything from servers to operating systems through IP-based connectivity. 

• Platform as a Service (PaaS) – Platform as a Service is often considered the most complex layer of cloud computing. It offers a platform that can be used to create software, which is then delivered to end customers over the internet. 
 
What Factors Are Responsible for the Growth of Cloud Computing Markets?
 
The development of cloud-based services has delivered a variety of benefits to companies across many different sectors. For instance, companies have the opportunity to use software from any device just by connecting to the cloud. The connection can easily be established through an internet browser or a native app. This lets individuals carry their settings and files over to other compatible devices seamlessly. 
 
Thanks to cloud computing, people can check their email on any computer or mobile device. Many people store their files on cloud services such as Google Drive and Dropbox, while others use cloud services to back up their photos, music and files. That way, all their files remain immediately available through the cloud service if a hard drive crashes. 
 
Cloud computing markets can be divided into several categories, all of which have seen unprecedented growth over the past couple of years. These markets include: 
 
• Centralized Data Centers – With the increasing demand for cloud computing, many centralized data centers have been established, and plans are in place to construct many more. Industry giants such as Amazon have concluded that the future lies in the cloud and are investing billions of dollars in data centers throughout the country. According to Reuters, Amazon plans to make cloud computing the largest part of its business in the future. 
 
• Security – Security is one of the major concerns associated with cloud computing. People store sensitive information in the cloud and are naturally concerned about its security. As a result, many companies have started offering cloud security solutions; Qualys and Websense are good examples of this trend. 
 
• Storage – Increasingly, people prefer not to keep information on local storage devices such as hard drives. Instead, they turn to cloud-based solutions that save files to a remote database, which is the main reason behind the popularity of vendors such as Dropbox and Mozy. People can now easily move all their files to secure cloud storage, and files stored on a hard drive can even be synced with the cloud automatically. 
 
• Cloud-Based Applications – Cloud computing also offers a variety of application services, among which Software as a Service holds a prominent place. Taleo, Salesforce and Keynote Systems are well-known companies that deliver their services through cloud computing, and many others are planning to follow. For software developers, this is one of the most convenient ways to let end users access applications, even over slow internet connections. 
 
• Virtualization Technology – Cloud-based solutions can also deliver application or desktop virtualization. VMware and Citrix are leading examples of vendors that offer virtualization technology through the cloud. 
 
Driven by this public demand, industry giants are investing heavily in cloud computing markets. According to IDC’s financial forecasts, revenue generated by worldwide cloud computing markets will grow at a compound annual growth rate (CAGR) of 20.4% over the 2015-2020 forecast period. The industries currently at the forefront of public cloud services spending are discrete manufacturing, banking, and professional services, which together account for nearly a third of total worldwide revenue in 2016. 
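
As a rough back-of-the-envelope check in Python (using only the $96.5 billion 2016 and $195 billion 2020 figures quoted above, so an illustration rather than an IDC calculation), the numbers hang together: roughly doubling over four years corresponds to growth of about 19-20% a year, and compounding $96.5 billion at the quoted 20.4% lands in the same neighbourhood.

spend_2016 = 96.5   # public cloud spending, USD billions (2016)
spend_2020 = 195.0  # projected spending, USD billions (2020)
years = 4

# Implied compound annual growth rate between the two quoted data points.
implied_cagr = (spend_2020 / spend_2016) ** (1 / years) - 1
print(f"Implied CAGR 2016-2020: {implied_cagr:.1%}")                  # ~19.2%

# Conversely, compound the 2016 figure at IDC's quoted 20.4% CAGR.
projected_2020 = spend_2016 * (1 + 0.204) ** years
print(f"$96.5B grown at 20.4% for 4 years: ${projected_2020:.0f}B")   # ~$203B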
 
Over the five-year period forecast by IDC, the industries expected to see the fastest revenue growth are media, telecommunications, and retail. Nevertheless, all 20 of the industries profiled in the spending guide will see revenue growth of more than 100% over the forecast period. The cloud computing markets are thus expanding rapidly, and investing in them is an attractive option for those with the budget to do so.
 
Original article here.
 

standard

GE Supplies IoT Developer Kit For Predix

2016-08-30 - By 

GE Digital is making it easier for IoT developers to tap into Predix analytics and use machine learning with the release of a new hardware and software kit.

The internet of things took a step toward becoming easier to manage on July 26, when GE released its Predix Developer Kit, a bundle of GE-supplied hardware and software that makes it easier to collect machine data over the internet.

Developers supply an IP address, an Ethernet connection, an electrical socket, and enough programming to indicate what data they want to collect. The Predix Kit appliance automatically establishes the connection, registers its presence with a central version of Predix, and starts transmitting time-series data such as temperature, pressure, speed, or flow readings from the device sensors it’s attached to.
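
The appliance handles that plumbing for you, but the underlying pattern it automates, periodically reading a sensor and pushing timestamped values to a central time-series store over the internet, can be sketched in a few lines of Python. Note that the endpoint URL, token and payload layout below are invented placeholders for illustration, not the actual Predix ingestion API.

import time
import requests  # third-party HTTP library, assumed installed

INGEST_URL = "https://timeseries.example.com/v1/datapoints"  # placeholder, not a real Predix endpoint
AUTH_TOKEN = "YOUR-DEVICE-TOKEN"                             # placeholder credential

def read_sensor():
    """Stand-in for reading a real temperature/pressure/flow sensor."""
    return 72.4

def push_reading(tag, value):
    """Send one timestamped data point for a named tag (e.g. "turbine-1.temperature")."""
    payload = {"tag": tag, "timestamp_ms": int(time.time() * 1000), "value": value}
    resp = requests.post(
        INGEST_URL,
        json=payload,
        headers={"Authorization": "Bearer " + AUTH_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    while True:
        push_reading("turbine-1.temperature", read_sensor())
        time.sleep(60)  # one reading per minute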

GE Digital owns and operates the hardware and software combination on behalf of the user, who subscribes to its output. That’s for the most secure form of IoT device data collection, based on a GE Field Agent (PDF) piece of hardware using a ruggedized PC. Developers may also opt to use an Intel Edison board with a dual-core CPU and WiFi connection, or a Raspberry Pi motherboard running an ARM CPU.

Without a Predix Developer’s Kit, an IoT programmer would set up a board to connect to a device, download software for operation of such a device, and build screens to visualize those operations. Then a programmer would have to do the type of programming that can recognize the device monitored and the hardware on which it was sitting, a task likely to take many hours, Mark Bernardo, professional services leader for GE Digital, Americas, wrote in a blog posted Tuesday.

“Predix Kits can reduce that time to less than 15 minutes,” Bernardo wrote.

Predix is GE Digital’s analytics system that can capture and store machine data and then apply analytics to it to learn from it and predict possible trouble points or future failures and how they can be addressed.

It became generally available Feb. 22 at the Mobile World Congress trade show in Barcelona.

Included in the kit is the Predix UI or user interface that GE Digital released in early March, a week after the Predix analytics system became generally available. It includes a set of software components that can be used by an interface designer or developer to create an IoT application that makes use of Predix services.

In addition, GE Digital announced Tuesday that it has opened the first “digital foundry” for IoT applications in Paris. Three more will be operating by the end of the year. Market researcher Evans Data has reported that IoT is attracting developers at a breakneck pace; there are already 6.2 million developing for it worldwide.

[Want to see how GE Predix works with the cloud? Read Microsoft, GE Partnership Targets Industrial Cloud.]

GE has seen 500 developers sign onto the Predix.io website as programmers each week since it became available in February. It counts 12,000 Predix developers as of today, and expects 20,000 by the end of the year.

GE Digital is using Predix to help it better manage electricity-generating turbines in aging Italian power plants.

Bernardo wrote that the Predix Kits would also be good for building applications to collect data from and manage solar energy projects. They can be used in creating mine safety systems that monitor oxygen levels in operations deep underground, or to monitor other factors in difficult working environments.

They can also be used to help create smart buildings that can track movements and room temperatures inside to improve comfort and security.

Original article here.


standard

Why cloud is killing traditional ERP systems

2016-08-30 - By 

Cloud ERP adoption continues to accelerate, leading to the slow death of traditional ERP

ERP is business process management software that allows an organisation to use a system of integrated applications to manage the business and automate many back office functions related to technology, services and human resources.

Software giant Oracle announced last week its intent to buy cloud computing pioneer NetSuite in a deal valued at $9.3 billion.

Netsuite is one of the major cloud ERP providers and Oracle’s acquisition of the company “serves as further reinforcement of the accepted fact that ERP systems are moving to the cloud”, according to Sabby Gill, executive vice president for international at Epicor Software.

The implementation of cloud ERP is growing, as research and analytics firm Forrester suggests: actual and planned ERP cloud replacement activity grew from 24% in 2013 to 43% in 2015.

>See also: Are large organisations finally embracing ERP in the cloud?

But what are the reasons behind this exponential growth? Why are companies choosing to abandon traditional ERP and adopt the cloud (or hybrid) version? What are the catalysts behind cloud ERP’s rise?

Supporting new business models is a key factor, allowing companies to scale to new customer demands, while cloud ERP systems save up to six times the capital invested cumulatively compared with traditional ERP systems.

Cloud ERP systems are far better suited to handling more complex manufacturing lines, where market speed is just as important as production scale.

Versatility is also central to cloud ERP.

Customers don’t have to be put in one category, and cloud ERP takes advantage of this by embracing the variety and changeability of the modern customer.

>See also: Four reasons why 2016 marks the end of the road for traditional ERP systems

Implementing cloud ERP is a way of embracing digital transformation, Gill believes, which will supplement core business growth.

Cloud ERP offers companies flexibility and mobility in a volatile, undulating market.

The choice to adopt cloud ERP is not a difficult one, and the adoption of the new technology will lead to new frontiers for businesses.

Original article here.


standard

Azure Stack will be a disruptive, game changing technology

2016-08-29 - By 

Few companies will use pure public or private cloud computing and certainly no company should miss the opportunity to leverage a combination. Hybrids of private and public cloud, multiple public cloud services and non-cloud services will serve the needs of more companies than any single cloud model and so it’s important that companies stop and consider their long term cloud needs and strategy.

Providing insight into the future of cloud computing is something that Pulsant has a lot of experience in and our focus on hybrid IT and hybrid services allows us to see where the adoption of public and private cloud benefits our customers’ strategies and requirements.

Since so much of IT’s focus in the recent past (and in truth, even now) has been on private cloud, any analytics that show the growth of public cloud give us a sense of how the hybrid idea will progress. The business use of SaaS is increasingly driving a hybrid model by default. Much of hybrid cloud use comes because of initial trials of public cloud services. As business users adopt more public cloud, SaaS in particular, they will need more support from companies, such as Pulsant, to help provide solutions for true integration and governance of their cloud.

Game changer

The challenge, as always in the cloud arena, is that there is no strict definition of the term ‘hybrid.’ There has been, until recently, a distinct lack of vendors and service providers able to offer simple solutions to some of the day-to-day challenges faced by most companies who are trying to develop a cloud strategy. Challenges include those of governance, security, consistent experiences between private and public services and the ability to simply ‘build once’ and ‘operate everywhere’.

Enter Azure Stack — it’s not often that I use language like “game changing” and “disruptive technology” but in the case of Azure Stack I don’t think these terms can be overstated. For the first time you have a service provider (for that’s what Microsoft is becoming) that is addressing what hybrid IT really means and how to make it simple and easy to use.

So what is Azure Stack?


This is the simple question that completely differentiates Azure (public) / Azure Stack from a traditional VM-based environment. When you understand this, you understand how Azure Stack is a disruptive and game changing technology.

For a long time now, application scalability has been achieved by simply adding more servers (memory, processors, storage, and so on). If there was a need for more capacity, the answer was “add more servers”. Ten years ago, that still meant buying another physical server and putting it in a rack. With virtualisation (VMware, Hyper-V, OpenStack) this has been greatly simplified, with the ability to simply “spin up” another virtual machine on request. Even this is now being superseded by the advent of cloud technologies.

Virtualisation may have freed companies from having to buy and own hardware (a capital drain, with a constant need for upgrades), but they are still left with the overhead of an operating system (Windows/Linux), possibly a core application (e.g. Microsoft SQL) and, most annoyingly, a raft of servers and software to patch, maintain and manage. Even with virtualisation, running dozens of “virtual machines” to host applications and services carries a lot of overhead.

The public cloud takes the next step: it aggregates resources such as CPUs, storage, networking, database tiers and web tiers, allocates each company the capacity it needs, and gives applications the necessary resources dynamically. More importantly, resources can be added and removed at a moment’s notice without the need to add or remove VMs. This in turn means fewer ‘virtual machines’ to patch and manage, and so less overhead.

The point of Azure Stack is that it takes the benefits of the public cloud and takes the next logical step in this journey — bringing exactly the same capabilities and services into your (private) data centre. This will enable a host of new ideas, letting companies develop a whole Azure Stack ecosystem where:

  • Hosting companies can sell private Azure Services direct from their datacentres
  • System integrators can design, deploy and operate an Azure solution once but deliver it in both private and public clouds
  • ISVs can write Azure-compatible software once and deploy in both private and public clouds
  • Managed service providers can deploy, customise and operate Azure Stack themselves

I started by making the comment that I thought Azure Stack will be a disruptive, game changing technology for Pulsant and its customers. I believe that it will completely change how datacentres will manage large scale applications, and even address dev/test and highly secured and scalable apps. It will be how hosting companies like Pulsant will offer true hybrid cloud services in the future.

Original article here.


standard

Happy Birthday Linux! Celebrating 25 years of the open source Linux operating system

2016-08-28 - By 

This last week marks the 25th birthday of Linux, the free operating system which now sits at the core of our modern world.

On this day, 25 years ago, Linus Torvalds started what would become one of the most prominent examples of free and open-source collaboration – and it all started with a simple message on the comp.os.minix message board.

“I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things),” Torvalds wrote.

Since then Linux has grown into the open source beast that it is today, with everything from companies and supercomputers to mobile phones relying on the OS.

In celebration of the 25 year milestone, CBR talked to a company known for pioneering the original open source model – Red Hat. Sitting down with Ellie Burns,  Martin Percival, Senior Solutions Architect at Red Hat, talked about past, current and future innovations being driven by Linux and open source technology.

 

EB: It is the 25th birthday of Linux – or is it? There is some confusion over whether it is Aug 25th or Oct 5th, when the first public release was made. So are you celebrating today?

MP: August 25th 1991 was the day that Linux was first announced to the public, which is why Red Hat is celebrating it today. Of course people do argue over this date, as the first kernel (0.1) was released in September 1991 and Linus Torvalds started working on Linux well before August 25th – however, I prefer to stick to the day the world found out it existed.

 

EB: What do you think has been the biggest milestone or breakthrough in Linux’s 25-year history?

MP: The creation of Linux itself was almost the biggest breakthrough. The fact that we could work together on an operating system that was available for the whole world to use was a radical idea. Linux offered a solid platform that people could not only rely on, but could also modify for their own needs. Since then Linux has enabled many technology innovations.  Over the years, Linux has continued to evolve, keeping up with hardware developments and new thinking in the way that we use software. This has kept it at the forefront of usable, useful operating systems.

Since its launch, a huge milestone for Linux was the rise of professional open source and the resulting acceptance and implementation of Linux in the enterprise. Organisations such as Red Hat made this possible by adapting Linux to make it work smoothly and securely for the enterprise and its needs. The implementation of virtualisation and container support in Linux and its use in most major cloud solutions are significant innovations that have radically changed how businesses can approach their computing problems.

 

EB: How has Linux changed the world since its inception in 1991?

MP: Linux has become a powerful driver in the ways in which we conduct our daily lives. Almost every item around us that uses technology, from phones to supercomputers to TVs and much more, is running on Linux. Often we do not even notice it; for example, when you walk in central London, black cabs with advertising screens are at every corner. These screens are running on Linux, but I am sure most people wouldn’t know that. Linux is everywhere, even in the most unexpected and benign objects. There are all sorts of unusual uses for Linux; in fact, smart fridges (with screens showing information such as temperature or the current time), PlayStations, IoT devices and systems, and traffic control systems are all examples of common objects enabled by Linux.

Linux has enabled technology to be woven into the world around us as more devices and objects make use of increasingly complex technology. It has helped change the world into an environment where everything and everyone is connected through technology and which is highly influenced by social media. Linux has been at the centre of all these changes in people’s lives for the last 25 years.

It has not only changed society, but also the enterprise and IT. Linux long ago reached the tipping point of being accepted in the enterprise datacenter. The work carried out by Red Hat and its business model has largely made this possible and accelerated this process, by giving organisations the kind of quality and support assurance for Linux that they would demand for any other product.

 

EB: What do you think is the biggest innovation that Linux made possible?

MP: It has to be “the cloud”. Before Linux, enterprises relied on hardware from Unix vendors such as SGI or HP to be able to run their own computing power. This bundling of hardware and operating system was very expensive, as every manufacturer locked the two together in its own unique and very profitable way. In these circumstances, it’s hard to imagine how the current cloud computing landscape could have ever started. The sheer cost of filling a cloud datacentre would have been prohibitive.

The beauty of Linux was that it grew initially on commodity hardware, bringing ever increasing processor power to the user without the lock-in of the proprietary model. This, in turn, has given organisations the ability to install more machines at a much lower cost, making it easier to imagine, and finally realise, the world of cloud computing today; where countless servers, all running Linux, provide the computing backbone for our lives.

Now we see Linux being used not only as an operating system in its own right, but also as the foundation for private cloud offerings like OpenStack and container runtimes like Red Hat’s OpenShift Container Platform.

 

EB: What does the future hold for Linux?

MP: The future for Linux is probably tied up with our lifestyle. Today our lives are closely linked and influenced by social media and everything is accessible publicly. This is thanks to the Internet and the information we can easily share – and this sharing is made possible, in many cases, by Linux operating systems.

The open source community picks up new ways of thinking very quickly and is efficient at driving software change to match that thinking. Invariably, as contributors write this code, Linux also evolves as an operating system, handling new hardware and ways of working that never existed 25 years ago.

In many ways, Linux is the ultimate adaptable creature, always matching functionality to requirements. It has already enabled drastic IT innovations, and many more hardware advances will be happening, especially with the rise of IoT and cloud computing. More IT challenges will no doubt appear within the business world, and the community will constantly have to adapt to these changes and challenges in order to make IT simpler, safer and better adapted to people’s and businesses’ needs.

 

EB: What do you think Linux will look like in 25 years’ time, at the grand old age of 50?

MP: In 25 years, Linux has helped us accomplish many innovations that no one would have even thought of back in 1991, let alone thought possible, and in the next 25 years we will see even greater changes as IT evolves. In the future, Linux will allow us to be even more connected, perhaps in ways quite different from how we interact today. I am convinced Linux will be alive and well and more embedded in our lives than ever before; perhaps parts of Linux will be running in our own bodies and there will be all sorts of artificial intelligence in play around us!

Linux remains well-poised to continue to lead IT innovation for the enterprise, but to remain leader for the next 25 years, Linux will need to focus on:

Security

Linux will serve as the frontline in enterprise security. We will diagnose the earliest lines of code to detect potential flaws and vulnerabilities in order to prevent any damage.

Hardware advances

Linux is the standardiser for the ecosystem of chipsets and hardware approaches, and the community will have to continue to adapt as more innovations arise.

Linux containers

This revolution will shake up Linux and open source in the next 25 years. Linux will drive towards scalability at an almost infinitesimal level.

The “next” next

Linux will be renowned as the platform for innovation, the place to identify “what’s next”. Linux will constantly adapt components in the kernel and add new ones so that it stays ahead of IT innovations and remains the point of reference for the future of IT, whatever that may be.

Original article here.


standard

18 Software Documentation Tools that Do The Hard Work For You

2016-08-28 - By 

Without documentation, software is just a black box. And black boxes aren’t anywhere near as useful as they could be, because their inner workings stay hidden from the people who need to understand them.

Software documentation turns your software into a glass box by explaining to users and developers how it operates or is used.

You’ve seen documentation tools before, but if you need a refresher, here are examples of 18 tools that can help any software shop – business2community.


standard

These R packages import sports, weather, stock data and more

2016-08-27 - By 

There are lots of good reasons you might want to analyze public data, from detecting salary trends in government data to uncovering insights about a potential investment (or your favorite sports team).

But before you can run analyses and visualize trends, you need to have the data. The packages listed below make it easy to find economic, sports, weather, political and other publicly available data and import it directly into R — in a format that’s ready for you to work your analytics magic.

Packages that are on CRAN can be installed on your system by using the R command install.packages("packageName") — you only need to run this once. GitHub packages are best installed with the devtools package — install that once with install.packages("devtools") and then use it to install packages from GitHub using the format devtools::install_github("repositoryName/packageName"). Once installed, you can load a package into your working session, once each session, using the format library("packageName").

Some of the sample code below comes from package documentation or blog posts by package authors. For more information about a package, you can run help(package="packageName") in R to get info on functions included in the package and, if available, links to package vignettes (R-speak for additional documentation). To see sample code for a particular function, try example(topic="functionName", package="packageName") or simply ?functionName for all available help about a function, including any sample code (not all documentation includes samples).

For more useful R packages, see Great R Packages for data import, wrangling and visualization.

R packages to import public data

• blscrapeR (Economics, Government) – For specific information about U.S. salaries and employment, the Bureau of Labor Statistics offers a wealth of data available via this new package. The blsAPI package is another option. On CRAN.
Sample code: bls_api(c("LEU0254530800", "LEU0254530600"), startyear = 2000, endyear = 2015)
More info: Blog post by the package author

• FredR (Finance, Government) – If you’re interested just in Fed data, FredR can access data from the Federal Reserve Economic Data API, including 240,000 US and international data sets from 77 sources. Free API key needed. On GitHub.
Sample code: fred <- FredR(api.key); fred$series.search("GDP"); gdp <- fred$series.observations(series_id = 'GDPC1')
More info: The project’s GitHub page

• quantmod (Finance, Government) – This package is designed for financial modelling but also has functions to easily pull data from Google Finance, Yahoo Finance and the St. Louis Federal Reserve (FRED). On CRAN.
Sample code: getSymbols("DEXJPUS", src="FRED")
More info: Intro on getting data

• censusapi (Government) – There are several other R packages that work with data from the U.S. Census, but this one aims to be complete and offer data from all the bureau’s APIs, not just from one or two surveys. API key required. On GitHub.
Sample code: mydata <- getCensus(name="acs5", vintage=2014, key=mycensuskey, vars=c("NAME", "B01001_001E", "B19013_001E"), region="congressional district:*", regionin="state:36")
More info: This Urban Institute presentation has more details; the project GitHub page offers some basics.

• RSocrata (Government) – Pull data from any municipality that uses the Socrata data platform. Created by the City of Chicago data team. On CRAN.
Sample code: mydata <- read.socrata("https://data.cityofchicago.org/Transportation/Towed-Vehicles/ygr5-vcbg")
More info: RSocrata blog post

• forbesListR (Misc) – A bit of a niche offering, this taps into lists maintained by Forbes, including largest private companies, top business schools and top venture capitalists. On GitHub.
Sample code: mydata <- get_years_forbes_list_data(years = 2012:2016, list_name = "Top VCs") # top venture capitalists 2012-2016
More info: See the project GitHub page. You may need to manually load the tidyr package for the code to work.

• pollstR (Politics) – This package pulls political polling data from the Huffington Post Pollster API. On CRAN.
Sample code: elec_2016_polls <- pollster_chart_data("2016-general-election-trump-vs-clinton")
More info: See the Intro vignette

• Lahman (Sports) – R interface for the famed Lahman baseball database. On CRAN.
Sample code: batavg <- battingStats()
More info: Blog post “Hacking the new Lahman Package 4.0-1 with RStudio”

• stattleshipR (Sports) – Stattleship offers NFL, NBA, NHL and MLB game data via a partnership with Gracenote. API key (currently still free) needed. On GitHub.
Sample code:
set_token("your-API-token")
sport <- 'baseball'
league <- 'mlb'
ep <- 'game_logs'
q_body <- list(team_id='mlb-bos', status='ended', interval_type='regularseason')
gls <- ss_get_result(sport=sport, league=league, ep=ep, query=q_body, walk=TRUE)
game_logs <- do.call('rbind', lapply(gls, function(x) x$game_logs))
More info: See the Stattleship blog post

• weatherData (Weather) – Pull historical weather data from cities/airports around the world. On CRAN. If you have trouble pulling data, especially on a Mac, try uninstalling and re-installing a different version with the code install_github("ozagordi/weatherData").
Sample code: mydata <- getWeatherForDate("BOS", "2016-08-01", end_date="2016-08-15")
More info: See this post by the package author.

 

Original article here.


standard

A VC explains why event-driven SaaS is the workflow of the future

2016-08-26 - By 

A senior SaaS executive once told me, “Reports sell software.” In a top down sale, that’s absolutely true. The CEO wants better predictability of bookings, so she’ll buy a CRM tool to gather the data. Classically, software has been built for that mantra.

First, a company buys a database. The sales people, marketers or customer care staff continue working as normal. But after the purchase, these teams are burdened with an additional step of updating the database when they’ve finished their work, so a report can be generated.

But this design has an agency problem. The employees investing the marginal effort see very little gain. This agency problem challenges the effectiveness of the software in three ways.

First, managers must motivate employees to update the database. Second, since employees report data retroactively, the database is always out of date, undermining the accuracy of the reports. Third, the benefit of the software to employees is only visible months or years after filling up the database, when an employee can review the history of a customer interaction for example.

In bottom-up sales, workflow sells software. New SaaS companies that aim to displace incumbent systems of record will architect their products in a radically different way: they will be event-driven SaaS companies.

 

Event-driven SaaS products consume events from data sources such as social media, news, analytics data, marketing data, customer support data and sales data. All of these events are ingested via API and committed to the database. From day one, these new systems of record fill themselves with data.

Using this information, they prioritize and inform work to help their teams be more effective. That can mean prioritizing which customers to speak with, automatically answering customer support queries or any number of things not yet invented.

Critically, there’s a feedback loop. The users’ actions are themselves events that feed back into the database. Separately, the system generates a similar report.
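
As a loose sketch of that loop in Python (the event fields, the in-memory store and the prioritization rule are all invented for illustration; a real product would ingest via APIs into a database), the key point is that data arrives on its own and the user’s actions feed straight back in:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Event:
    source: str       # e.g. "support", "marketing", "sales", "user_action"
    customer_id: str
    kind: str         # e.g. "ticket_opened", "email_clicked", "call_logged"
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class EventStore:
    """Toy in-memory system of record; a real product would use a database."""
    def __init__(self):
        self.events = []  # list of Event

    def ingest(self, event):
        self.events.append(event)

    def prioritize_customers(self):
        # Naive rule for illustration: any customer with an open ticket gets worked first.
        return sorted({e.customer_id for e in self.events if e.kind == "ticket_opened"})

store = EventStore()

# Day one: events flow in from external sources via API, with no manual data entry.
store.ingest(Event(source="support", customer_id="acme", kind="ticket_opened"))
store.ingest(Event(source="marketing", customer_id="globex", kind="email_clicked"))

# The user works the prioritized list...
for customer in store.prioritize_customers():
    print("Next customer to contact:", customer)
    # ...and each action is itself an event that feeds back into the store.
    store.ingest(Event(source="user_action", customer_id=customer, kind="call_logged"))

# Reporting becomes a by-product of the same event stream.
print("Total events recorded:", len(store.events))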

Event-driven SaaS products mitigate and ideally eliminate the agency problem of classic software. Users benefit directly from using it. The reporting is a by-product of an optimized workflow, and coincidentally is much more accurate than a classical system.

This agency problem is at the heart of the adoption challenges of classical software deployments, and in particular of the dominant systems of record in the market today. The next generation of multi-billion dollar SaaS platforms, the startups that will displace incumbents, will do it with event-driven architectures and optimized workflows.

Original article here.


standard

Is Most Published Research Wrong? (video)

2016-08-26 - By 

It sounds crazy but there are logical reasons why the majority of published research literature is false. This video digs into the reasons for this and what’s being done about it.

Original YouTube link is here.


standard

ARM Wrestles Its Way Into Supercomputing

2016-08-25 - By 

The designer of the chips that run most of the world’s mobile devices has announced its first dedicated processor for use in supercomputers.

The British company ARM Holdings, which was recently acquired by the Japanese telecom and Internet company SoftBank, has announced a new kind of chip architecture dedicated to high-performance computing. The new designs use what’s known as vector processing to work with large quantities of data simultaneously, making them well suited to applications such as financial and scientific computing.
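
Vector processing simply means applying one operation to many data elements at once rather than looping over them one at a time. The Python/NumPy snippet below is only a software-level analogy of that idea, nothing ARM-specific, but it shows why data-parallel operations suit financial and scientific workloads.

import numpy as np

prices = np.random.rand(1_000_000)  # e.g. a million instrument prices
rate = 1.05

# Scalar style: one element at a time, as a plain loop would do it.
scaled_loop = [p * rate for p in prices]

# Vector style: one operation applied across the whole array at once;
# NumPy dispatches it to optimized, data-parallel native code.
scaled_vec = prices * rate

assert np.allclose(scaled_loop, scaled_vec)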

This isn’t ARM’s first association with supercomputers. Earlier this year, Fujitsu announced that it plans to build a successor to the Project K supercomputer, which is housed at the Riken Advanced Institute for Computational Science, using ARM chips. In fact, it was announced today that the new Post-K machine will be the first to license the newly announced ARM architecture.

ARM has built a reputation for processor designs known for their energy efficiency. That’s why they’ve proven so popular in mobile devices—they extend battery life in smartphones and tablets. Among the companies that license ARM’s designs are Apple, Qualcomm, and Nvidia. But energy-efficient chips also generate less heat and use less power, both desirable attributes in large-scale processing applications such as supercomputers.

Intel will be worried by the purchase. The once-dominant chipmaker missed the boat on chips for mobile devices, allowing ARM to dominate the sector. But until recently it’s always been a leading player in the supercomputer arena. Now the world’s fastest supercomputer is built using Chinese-made chips, and clearly ARM plans to give it a run for its money, too.

It remains to be seen how successful ARM-powered supercomputers will be, though. The first big test will come when Fujitsu’s Post-K machine is turned on, which is expected in 2020. Intel will be watching carefully the whole way.

Original article here.


standard

The 100 Best Free Google Chrome Extensions

2016-08-25 - By 

It’s been an up and down couple of years for Google’s Chrome Web browser.

When we first did a version of this story in January 2015, Chrome had about 22.65 percent of the browser market worldwide, according to Net Applications. As of July 2016, Chrome has 50.95 percent—it crossed paths with Microsoft Internet Explorer in March, when both hit 39 percent. IE continues to dwindle, as do Firefox and Safari. Only Chrome and Microsoft’s new Edge browser have gained, but Edge is only at 5.09 percent of the market.

Then it lost some kudos—from us. After several years as PCMag’s favorite browser, a resurgent Firefox took our Editors’ Choice award. The reason: Chrome lags in graphics hardware acceleration, and it isn’t exactly known for respecting user privacy (just like its parent company).

That said, Chrome remains a four-star tour de force for Web surfing, with full HTML5 support and speedy JavaScript performance. Obviously, there is no denying its popularity. And, like Firefox before it, it’s got support for extensions that make it even better. Its library of extras, found in the Chrome Web Store, is larger than anything rival Firefox has had for years. Also, the store has add-ons to provide quick access to just about every Web app imaginable.

Rather than having you stumble blindly through the store to find the best add-ons, we’ve compiled a list of 100 you should consider. Several are unique to Google and its services (such as Gmail), which isn’t surprising considering who made Chrome. Most extensions work across operating systems, so you can try them on any desktop platform; there may be some versions that work on the mobile Chrome, too.

All of these extensions are free, so there’s no harm in giving them all a try—you can easily disable or remove them by typing chrome://extensions/ into the Chrome address bar, or remove one by right-clicking its icon in the toolbar. As of Chrome version 49, every extension must have a toolbar icon; you can hide an icon without uninstalling the extension by right-clicking it and selecting “Hide in Chrome Menu,” but you can’t get rid of the icons for good.

Read on for our favorites here, and let us know if we missed a great one!


standard

Infographic: A closer look at the fintech industry

2016-08-19 - By 

Financial Technology, also known as FinTech, has made robust strides over the last five years and is becoming a pivotal part of the global economy. As the technology meant to make financial services more efficient and effective for all, FinTech is just getting started; we should expect more growth in the sector this fiscal year and beyond. Check out the infographic below for a closer look at the fintech industry.

Financial and technology gurus predicted this exponential growth. A good example is Bitcoin and the other cryptocurrencies. The world is embracing a new way of transacting, and in the next few years the traditional methods of doing business may become extinct.

The rise of FinTech is almost legendary. In 2010, companies were only testing the waters, with everyone waiting to see what the industry would offer. By 2013 the sector had grown tremendously, with investment rising from $3 billion to over $12 billion in 2014. The growth did not stop there: by the end of Q2 2015, a further $12.7 billion had been invested. The growth experienced by FinTech is unlike anything seen before.

FinTech’s favorite areas of focus include payments, lending and cloud-sourcing services. With the introduction of mobile transactions and bitcoin, consumers have more checkout options, which fuels the growth of FinTech. More companies around the world, including Dell, DISH Network, Microsoft, and Overstock, now accept bitcoin at checkout. The main ingredient in FinTech’s tremendous growth is the promise of better, more efficient services; the world is simply asking for better than what has been the norm.

From the analysis, most of the investment comes from the US, followed by Europe and Asia. In 2013, the US invested a total of $4.05 billion in FinTech. By the end of 2014 that had grown to $9.89 billion, and the projection for 2015 stood at $12 billion by the end of Q2. Europe and Asia are not far behind: by the end of Q3 2015, investments there were projected to grow to $4.4 billion and $3.5 billion respectively.

The growth of FinTech is particularly strong in the parts of the world where the right infrastructure is in place. Currently, the US leads the pack as the country with the most conducive environment for FinTech start-ups, with hubs such as Silicon Valley, Boston, New York, and Los Angeles. Tel Aviv in Israel and London are two other cities in which start-ups show promise.

The promise of FinTech shows no sign of dying down. Five trending start-ups, Stocktwits, Motif Investing, Robinhood, Moven, and Acorns, are promising to shake up the banking and investing industries.

The future is still unclear as far as FinTech is concerned. The one thing that is clear is that the public will be at liberty to make decisions that affect them more quickly, owing to the availability of information that this advanced technology will bring.

A new infographic from Savvy Beaver and startup Call Levels takes a closer look at the current financial technology landscape.

 

Original article here.

 


standard

Google has made a big shift in its plan to give everybody faster internet: from wired to wireless

2016-08-16 - By 

People loved the idea of Google Fiber when it was first announced in 2010. Superfast internet that’s 100 times faster than the norm — and it’s cheap? It sounded too good to be true.

But maybe that initial plan was a little too ambitious.

Over the last several years, Google has worked with dozens of cities and communities to build fiber optic infrastructure that can deliver gigabit speeds to homes and neighborhoods — this would let you stream videos instantly or download entire movies in seconds.

But right now, introducing Google Fiber in any town is a lengthy, expensive process. Google first needs to work with city leaders to lay the groundwork for construction, and then it needs to lay cables underground, along telephone lines, and in houses and buildings.

This all takes time and money: Google has spent hundreds of millions of dollars on these projects, according to The Wall Street Journal, and the service is available in just six metro areas, an average of one per year.

Given these barriers, Google Fiber is reportedly working on a way to make installation quicker, cheaper, and more feasible. According to a new filing with the Federal Communications Commission earlier this month, Google has been testing a new wireless-transmission technology that “relies on newly available spectrum” to roll out Fiber much more quickly.

“The project is in early stages today, but we hope this technology can one day help deliver more abundant internet access to consumers,” a Google spokesperson told Business Insider.

And, according to The Journal, Google is looking to use this wireless technology in “about a dozen new metro areas, including Los Angeles, Chicago and Dallas.”

Right now, Google Fiber customers can pay $70 a month for 1-gigabit-per-second speeds and an extra $60 a month for the company’s TV service. It’s unclear if this wireless technology would change the pricing, but at the very least, it ought to help accelerate Fiber’s expansion and cut down on installation costs.

One of the company’s recent acquisitions could help this transition. In June, Google Fiber bought Webpass, a company that knows how to wirelessly transmit internet service from fiber-connected antennas to antennas mounted on buildings. It’s a concept that’s pretty similar to Starry, another ambitious company that wowed us earlier this year with its plan for a superfast, inexpensive internet service.

Original article here.


standard

A New Era for Application Building, Management and User Access

2016-08-15 - By 

Every decade, a set of major forces work together to change the way we think about “applications.” Until now, those changes were principally evolutions of software programming, networked communications and user interactions.

In the mid-1990s, Bill Gates’ famous “The Internet Tidal Wave” letter highlighted the rise of the internet, browser-based applications and portable computing.

By 2006, smart, touch devices, Software-as-a-Service (SaaS) and the earliest days of cloud computing were emerging. Today, data and machine learning/artificial intelligence are combining with software and cloud infrastructure to become a new platform.

Microsoft CEO Satya Nadella recently described this new platform as “a third ‘run time’ — the next platform…one that doesn’t just manage information but also learns from information and interacts with the physical world.”

I think of this as an evolution from software to dataware as applications transform from predictable programs to data-trained systems that continuously learn and make predictions that become more effective over time. Three forces — application intelligence, microservices/serverless architectures and natural user interfaces — will dominate how we interact with and benefit from intelligent applications over the next decade.

In the mid-1990s, the rise of internet applications offered countless new services to consumers, including search, news and e-commerce. Businesses and individuals had a new way to broadcast or market themselves to others via websites. Application servers from BEA, IBM, Sun and others provided the foundation for internet-based applications, and browsers connected users with apps and content. As consumer hardware shifted from desktop PCs to portable laptops, and infrastructure became increasingly networked, the fundamental architectures of applications were re-thought.

By 2006, a new wave of core forces shaped the definition of applications. Software was moving from client-server to Software-as-a-Service. Companies like Salesforce.com and NetSuite led the way, with others like Concur transforming into SaaS leaders. In addition, hardware started to become software services in the form of Infrastructure-as-a-Service with the launch of Amazon Web Services S3 (Simple Storage Service) and then EC2 (Elastic Cloud Compute Service).

Smart, mobile devices began to emerge, and applications for these devices quickly followed. Apple entered the market with the iPhone in 2007, and a year later introduced the App Store. In addition, Google launched the Android ecosystem that year. Applications were purpose-built to run on these smart devices, and legacy applications were re-purposed to work in a mobile context.

As devices, including iPads, Kindles, Surfaces and others proliferated, application user interfaces became increasingly complex. Soon developers were creating applications that responsively adjusted to the type of device and use case they were supporting. Another major change of this past decade was the transition from typing and clicking, which had dominated the PC and Blackberry era, to touch as a dominant interface for humans and applications.

In 2016, we are on the cusp of a totally new era in how applications are built, managed and accessed by users. The most important aspect of this evolution is how applications are being redefined from “software programs” to “dataware learners.”

For decades, software has been ­programmed and designed to run in predictable ways. Over the next decade, dataware will be created through training a computer system with data that enables the system to continuously learn and make predictions based on new data/metadata, engineered features and algorithm-powered data models.

In short, software is programmed and predictable, while the new dataware is trained and predictive. We benefit from dataware all the time today in modern search, consumer services like Netflix and Spotify and fraud protection for our credit cards. But soon, every application will be an intelligent application.
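
To make the contrast concrete, here is a minimal sketch of a “trained, not programmed” system in Python, using scikit-learn’s incremental-learning API. The data is synthetic and the hidden rule is invented purely for illustration; the point is only that the behaviour comes from continual training on new data rather than from hand-written logic.

import numpy as np
from sklearn.linear_model import SGDClassifier

# A model that learns from data instead of being explicitly programmed,
# and that keeps learning as new observations arrive.
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])
rng = np.random.default_rng(0)

def new_batch(n=200):
    """Synthetic stand-in for a fresh batch of interaction data (2 engineered features)."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # the hidden rule the model must learn
    return X, y

for day in range(5):
    X, y = new_batch()
    if day > 0:
        # Evaluate on data the model has not seen yet: quality improves over time.
        print(f"day {day}: accuracy on unseen batch = {model.score(X, y):.2f}")
    model.partial_fit(X, y, classes=classes)  # continuous, incremental training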

Three major forces underlie the shift from software to dataware, a shift that necessitates a new “platform” for application development and operations; the forces are interrelated.

Application intelligence

Intelligent applications are the end product of this evolution. They leverage data, algorithms and ongoing learning to anticipate and improve their interactions with people and machines.

 

They combine three layers: innovative data and metadata stores, data intelligence systems (enabled by machine learning/AI) and the predictive intelligence that is expressed at an “application” layer. In addition, these layers are connected by a continual feedback loop that collects data at the points of interaction between machines and/or humans to continually improve the quality of the intelligent applications.

Microservices and serverless functions

Monolithic applications, even SaaS applications, are being deconstructed into components that are elastic building blocks for “macro-services.” Microservice building blocks can be simple or multi-dimensional, and they are expressed through Application Programming Interfaces (APIs). These APIs often communicate machine-to-machine, such as Twilio for communication or Microsoft’s Active Directory Service for identity. They also enable traditional applications to more easily “talk” or interact with new applications.
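
In practice a microservice of this kind is often nothing more exotic than a small, independently deployable HTTP API. The Flask sketch below is a generic, hypothetical example (the service, routes and data are invented, and it stands in for no particular vendor’s API); it exposes one narrow capability that other applications or bots can call machine-to-machine.

from flask import Flask, jsonify, request

app = Flask(__name__)

# A single-purpose "user directory" microservice: one small capability behind an API.
USERS = {"42": {"name": "Ada", "role": "admin"}}  # toy in-memory store

@app.route("/users/<user_id>", methods=["GET"])
def get_user(user_id):
    user = USERS.get(user_id)
    if user is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(user)

@app.route("/users", methods=["POST"])
def create_user():
    body = request.get_json(force=True)
    user_id = str(len(USERS) + 42)
    USERS[user_id] = {"name": body["name"], "role": body.get("role", "member")}
    return jsonify({"id": user_id}), 201

if __name__ == "__main__":
    # Other services talk to it over HTTP, e.g. GET http://localhost:5000/users/42
    app.run(port=5000)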

And, in the form of “bots,” they perform specific functions, like calling a car service or ordering a pizza via an underlying communication platform. A closely related and profound infrastructure trend is the emergence of event-driven, “serverless” application architectures. Serverless functions such as Amazon’s Lambda service or Google Functions leverage cloud infrastructure and containerized systems such as Docker.

At one level, these “serverless functions” are a form of microservice. But, they are separate, as they rely on data-driven events to trigger a “state-less” function to perform a specific task. These functions can even call intelligent applications or bots as part of a functional flow. These tasks can be connected and scaled to form real-time, intelligent applications and be delivered in a personalized way to end-users. Microservices, in their varying forms, will dominate how applications are built and “served” over the next decade.
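
The serverless model shrinks the unit of deployment further still, to a single stateless function triggered by an event. The sketch below follows the AWS Lambda calling convention of a handler taking an event and a context; the event fields and the alert threshold are invented for illustration, and the wiring that connects the function to a real event source is platform configuration rather than code.

import json

def handler(event, context):
    """Stateless function: triggered by a data event, performs one task, returns."""
    # Hypothetical event shape: a sensor reading forwarded by the platform.
    reading = json.loads(event["body"]) if "body" in event else event
    value = float(reading["value"])

    alert = value > 100.0  # invented threshold, purely for illustration
    result = {"tag": reading.get("tag", "unknown"), "value": value, "alert": alert}

    return {"statusCode": 200, "body": json.dumps(result)}

# Local smoke test; in production the cloud platform invokes handler() directly.
if __name__ == "__main__":
    print(handler({"tag": "turbine-1.temperature", "value": 118.2}, None))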

Natural user interface

If touch was the last major evolution in interfaces, voice, vision and virtual interaction using a mix of our natural senses will be the major interfaces of the next decade. Voice is finally exploding with platforms like Alexa, Cortana and Siri. Amazon Alexa already has more than 1,000 voice-activated skills on its platform. And, as virtual and augmented reality continue to progress, voice and visual interfaces (looking at an object to direct an action) will dominate how people interact with applications.

Microsoft HoloLens and Samsung Gear are early examples of devices using visual interfaces. Even touch will evolve in both the physical sense through “chatbots” and the virtual sense, as we use hand controllers like those that come with a Valve/HTC Vive to interact with both our physical and virtual worlds. And especially in virtual environments, using a voice-activated service like Alexa to open and edit a document will feel natural.

What are the high-level implications of the evolution to intelligent applications powered by a dataware platform?

SaaS is not enough. The past 10 years in commercial software have been dominated by a shift to cloud-based, always-on SaaS applications. But, these applications are built in a monolithic (not microservices) manner and are generally programmed, versus trained. New commercial applications will emerge that will incorporate the intelligent applications framework, and usually be built on a microservices platform. Even those now “legacy” SaaS applications will try to modernize by building in data intelligence and microservices components.

Data access and usage rights are required. Intelligent applications are powered by data, metadata and intelligent data models (“learners”). Without access to the data and the right to use it to train models, dataware will not be possible. The best sources of data will be proprietary and differentiated. Companies that curate such data sources and build frequently used, intelligent applications will create a virtuous cycle and a sustainable competitive advantage. There will also be a lot of work and opportunity ahead in creating systems to ingest, clean, normalize and create intelligent data learners leveraging machine learning techniques.

New form factors will emerge. Natural user interfaces leveraging speech and vision are just beginning to influence new form factors like Amazon Echo, Microsoft HoloLens and Valve/HTC Vive. These multi-sense and machine-learning-powered form factors will continue to evolve over the next several years. Interestingly, the three mentioned above emerged from a mix of Seattle-based companies with roots in software, e-commerce and gaming!

The three major trends outlined here will help turn software applications into dataware learners over the next decade, and will shape the future of how man and machine interact. Intelligent applications will be data-driven, highly componentized, accessed via almost all of our senses and delivered in real time.

These applications and the devices used to interact with them, which may seem improbable to some today, will feel natural and inevitable to all by 2026 — if not sooner. Entrepreneurs and companies looking to build valuable services and software today need to keep these rapidly emerging trends in mind.

I remember debating with our portfolio companies in 2006 and 2007 whether or not to build products as SaaS and mobile-first on a cloud infrastructure. That ship has sailed. Today we encourage them to build applications powered by machine learning, microservices and voice/visual inputs.

Original article here.


standard

IDC: Global Public Cloud Services Will Reach $195 Billion By 2020

2016-08-14 - By 

While SaaS is by far the leading slice of the public cloud services pie, IT professionals should keep an eye on PaaS and IaaS as well, because these are the fast-rising segments of the cloud market, according to IDC.

Between now and 2020, worldwide spending on public cloud services is expected to soar to more than $195 billion, essentially doubling the revenue the industry is expected to generate by the end of this year, according to a new report from IDC.

The report, which the research firm released Aug. 10, is an update to IDC’s “Worldwide Semiannual Public Cloud Services Spending Guide,” originally published in January. The updated numbers show that worldwide public cloud services spending will grow at a compound annual growth rate (CAGR) of 20.4% from 2015 to 2020.

For 2016, public cloud revenue is expected to reach $96.5 billion.
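
As a quick sanity check, the compound annual growth rate formula is CAGR = (end / start)^(1 / years) - 1. Plugging in the article’s 2016 and 2020 figures gives roughly 19% per year, in the same ballpark as IDC’s 20.4%, which is measured from a 2015 base not quoted here.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1.0 / years) - 1

# Article figures: $96.5 billion in 2016 growing to $195 billion in 2020.
print("Implied CAGR 2016-2020: %.1f%%" % (100 * cagr(96.5, 195.0, 4)))  # ~19.2%
```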

In Wednesday’s report, Benjamin McGrath, IDC’s senior research analyst of SaaS and business models, noted:

Cloud software will significantly outpace traditional software product delivery over the next five years, growing nearly three times faster than the software market as a whole and becoming the significant growth driver to all functional software markets. By 2020, about half of all new business software purchases will be of service-enabled software, and cloud software will constitute more than a quarter of all software sold.

[Learn what cloud skills you need to help your career.]

With such rapid growth in public cloud services expected in the coming years, here are some key points that IT professionals may want to monitor based on IDC’s findings:

  • While software-as-a-service (SaaS) is expected to account for more than two-thirds of public cloud spending between 2015 and 2019 — according to the initial January report — infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) will each grow faster, at rates of 27% and 30.6%, respectively.
  • The updated report published Wednesday reconfirms that IaaS and PaaS revenues will continue to be the growth leaders in cloud services. IT professionals, as a result, may want to begin exploring and evaluating potential IaaS and PaaS partners that would work well with the company’s SaaS vendor. Administrators could also ask their SaaS vendor whether the company offers such a capability itself or, as is the case with Salesforce.com, partners with another vendor for such services.
  • In addition, PaaS is an area worth noting as it increasingly becomes the focus for many companies’ DevOps approach toward rapid business and mobile application development.
  • The US will capture nearly two-thirds of all public cloud services revenues over the next five years, with Western Europe and the Asia-Pacific region — excluding Japan — following behind. The Asia-Pacific region and Latin America, however, are poised to grow the fastest over the next five years.
  • Banking, discrete manufacturing, and professional services accounted for nearly a third of the spending on global public cloud services in 2016. However, the fastest growth areas leading up to 2020 will be media, retail, and telecommunications.

Eileen Smith, IDC’s program director of customer insights and analysis, noted in the report that cloud computing and the flexibility the technology offers are now expanding well beyond traditional enterprises to all businesses:

Cloud computing is breaking down traditional technology barriers as line of business leaders and their IT organizations rely on cloud to flexibly deliver IT resources at the lower cost and faster speed that businesses require. Organizations across all industries are now free to adapt to market changes quicker and take more risks, as they are no longer bound by legacy IT constraints.

Original article here.


standard

Is Anything Ever ‘Forgotten’ Online?

2016-08-13 - By 

When someone types your name into Google, suppose the first link points to a newspaper article about you going bankrupt 15 years ago, or to a YouTube video of you smoking cigarettes 20 years ago, or simply a webpage that includes personal information such as your current home address, your birth date, or your Social Security number. What can you do — besides cry?

Unlike those living in the United States, Europeans actually have some recourse. The European Union’s “right to be forgotten” (RTBF) law allows EU residents to fill out an online form requesting that a search engine (such as Google) remove links that compromise their privacy or unjustly damage their reputation. A committee at the search company, primarily consisting of lawyers, will review your request, and then, if deemed appropriate, the site will no longer display those unwanted links when people search for your name.

But privacy efforts can backfire. A landmark example of this happened in 2003, when actress and singer Barbra Streisand sued a California couple who took aerial photographs of the entire length of the state’s coastline, which included Streisand’s Malibu estate. Streisand’s suit argued that her privacy had been violated, and tried to get the photos removed from the couple’s website so nobody could see them. But the lawsuit itself drew worldwide media attention; far more people saw the images of her home than would have through the couple’s online archive.

In today’s digital world, privacy is a regular topic of concern and controversy. If someone discovered the list of all the things people had asked to be “forgotten,” they could shine a spotlight on that sensitive information. Our research explored whether that was possible, and how it might happen; it showed that hidden news articles can be unmasked with some hacking savvy and a moderate amount of financial resources.

Keeping the past in the past

The RTBF law does not require websites to take down the actual web pages containing the unwanted information. Rather, just the search engine links to those pages are removed, and only from results from searches for specific terms.

In most circumstances, this is perfectly fine. If you shoplifted 20 years ago, and people you have met recently do not suspect you shoplifted, it is very unlikely they would discover — without the aid of a search engine — that you ever shoplifted by simply browsing online content. By removing the link from Google’s results for searches of your name, your brief foray into shoplifting would be, for all intents and purposes, “forgotten.”

This seems like a practical solution to a real problem that many people are facing today. Google has received requests to remove more than 1.5 million links from specific search results and has removed 43 percent of them.

‘Hiding’ in plain sight

But our recent research has shown that a transparency activist or private investigator, with modest hacking skills and financial resources, can find newspaper articles that have been removed from search results and identify the people who requested those removals. This data-driven attack has three steps.

First, the searcher targets a particular online newspaper, such as the Spanish newspaper El Mundo, and uses automated software tools to download articles that may be subject to delisting (such as articles about financial or sexual misconduct). Second, he again uses automated tools to get his computer to extract the names mentioned in the downloaded articles. Third, he runs a program to query google.es with each of those names, to see if the corresponding article is in the google.es search results or not. If not, then it is almost certainly an RTBF-delisted link, and the corresponding name is the person who requested the delisting.
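
A schematic sketch of the third step’s set logic appears below. The search lookup is deliberately left as a hypothetical stub; how the researchers issued automated queries against google.es, and whether doing so is permitted by a search engine’s terms of service, is outside the scope of this illustration.

```python
def search_result_urls(name):
    """Hypothetical stub: return the URLs shown when searching for `name`."""
    raise NotImplementedError("left abstract in this sketch")

def likely_delistings(articles):
    """Flag (name, article_url) pairs where the article no longer appears in
    the search results for that name -- the signature of a delisted link.

    `articles` is an iterable of (article_url, [names mentioned]) pairs
    produced by the first two steps described above.
    """
    flagged = []
    for url, names in articles:
        for name in names:
            if url not in search_result_urls(name):
                flagged.append((name, url))
    return flagged
```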

As a proof of concept, we did exactly this for a subset of articles from El Mundo, a Madrid-based daily newspaper we chose in part because one of our team speaks Spanish. From the subset of downloaded articles, we discovered two that are being delisted by google.es, along with the names of the corresponding requesters.

Using a third-party botnet to send the queries to Google from many different locations, and with moderate financial resources ($5,000 to $10,000), we believe the effort could cover all candidate articles in all major European newspapers. We estimate that 30 to 40 percent of the RTBF delisted links in the media, along with their corresponding requesters, could be discovered in this manner.

Lifting the veil

Armed with this information, the person could publish the requesters’ names and the corresponding links on a new website, naming those who have things they want forgotten and what it is they hope people won’t remember. Anyone seeking to find information on a new friend or business associate could visit this site — in addition to Google — and find out what, if anything, that person is trying to bury in the past. One such site already exists.

At present, European law only requires the links to be removed from country- or language-specific sites, such as google.fr and google.es. Visitors to google.com can still see everything. This is the source of a major European debate about whether the right to be forgotten should also require Google to remove links from searches on google.com. But because our approach does not involve using google.com, it would still work even if the laws were extended to cover google.com.

Should the right to be forgotten exist?

Even if delisted links to news stories can be discovered, and the identities of their requesters revealed, the RTBF law still serves a useful and important purpose for protecting personal privacy.

By some estimates, 95 percent of RTBF requests are not seeking to delist information that was in the news. Rather, people want to protect personal details such as their home address or sexual orientation, and even photos and videos that might compromise their privacy. These personal details typically appear in social media like Facebook or YouTube, or in profiling sites, such as profileengine.com. But finding these delisted links for social media is much more difficult because of the huge number of potentially relevant web pages to be investigated.

People should have the right to retain their privacy — particularly when it comes to things like home addresses or sexual orientation. But you may just have to accept that the world might not actually forget about the time when, as a teenager, your friend challenged you to shoplift.

Original article here.


standard

Open-source CognizeR lets data scientists easily access IBM’s Watson tools

2016-08-12 - By 

Data scientists using the programming language R will soon find it much easier to leverage the tools and services offered by IBM Watson. On Thursday, IBM Watson and Columbus Collaboratory partnered on the release of CognizeR, an open-source R extension that makes it easier to use Watson from a native development environment.

Typically, Watson services like Language Translation or Visual Recognition are leveraged via API calls that have to be coded in Java or Python. Now, using the CognizeR extension, data scientists can tap into Watson directly from their R environment.
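
For comparison, this is roughly what the raw REST-call route looks like in Python with the requests library. The endpoint URL, version path and credentials below are illustrative assumptions modeled on the Bluemix-era Watson translation service, not a copy of IBM’s current API.

```python
import requests

# Illustrative values only -- real credentials come from the IBM Watson service.
WATSON_URL = "https://gateway.watsonplatform.net/language-translator/api/v2/translate"
USERNAME = "service-username"
PASSWORD = "service-password"

def translate(text, source="en", target="es"):
    """Call a Watson-style translation endpoint directly over HTTP."""
    response = requests.get(
        WATSON_URL,
        auth=(USERNAME, PASSWORD),            # HTTP basic auth
        params={"text": text, "source": source, "target": target},
        headers={"Accept": "text/plain"},
    )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # Requires real service credentials to actually run.
    print(translate("Data science is easier with good tools."))
```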

“CognizeR offers easier access to a variety of Watson’s artificial intelligence services that can enhance the performance of predictive models developed in R, an open-source language widely used by data scientists for statistical and analytics applications,” a press release said.

Watson, of course, is IBM’s cognitive computing arm, and Columbus Collaboratory is a team of companies that work together on analytics and cybersecurity solutions. According to the press release, a big reason for the development of CognizeR was to help with the adoption of cognitive computing in the founding companies of Columbus Collaboratory, which include Battelle, Cardinal Health, Nationwide, American Electric Power, OhioHealth, Huntington, and L Brands.

In the press release announcing the extension, IBM cited IDC figures that said that less than 1% of all data, especially unstructured data like chats, emails, social media, and images, is properly analyzed. The team is hoping that CognizeR can help improve predictive models and analyze more data.

Currently, data scientists will be able to use Watson Language Translation, Personality Insights, Tone Analyzer, Speech to Text, Text to Speech, and Visual Recognition through CognizeR. However, more services will be offered based on user feedback.

“As we collect feedback, we’ll be able to continually improve the experience by adding the cognitive services that data scientists want and need the most,” said Shivakumar Vaithyanathan, IBM Fellow and Director, Watson Content Services.

R is one of the most widely used languages in statistics and big data today. By making Watson more readily available to data scientists using the language, IBM is positioning Watson more strongly among its enterprise audience.

CognizeR can be downloaded on GitHub.

Original Source


standard

$1 Trillion in IT Spending Moving to Cloud. How Much is on Waste? [Infographic]

2016-08-11 - By 

Gartner recently reported that by 2020, the “cloud shift” will affect more than $1 trillion in IT spending.

The shift comes from the confluence of IT spending on enterprise software, data center systems, and IT services all moving to the cloud.

With this enormous shift and change of practices comes a financial risk that is very real: your organization may be spending money on services you are not actually using. In other words, wasting money.

How big is the waste problem, exactly?

The 2016 Cloud Market

While Gartner’s $1 trillion number refers to the next 5 years, let’s take a step back and look just at the size of the market in 2016, where we can more easily predict spending habits.

The size of the 2016 cloud market, from that same Gartner study, is about $734 billion. Of that, $203.9 billion is spent on public cloud.

Public cloud spend is spread across a variety of application services, management and security services, and more (BPaaS, SaaS, PaaS, etc.) – all of which have their own sources of waste. In this post, let’s focus on the portion for which wasted spend is easiest to quantify: cloud infrastructure services (IaaS).

Breaking down IaaS Spending

Within the $22.4 billion spent on IaaS, about 2/3 of spending is on compute resources (rather than database or storage). From a recent survey we conducted – bolstered by our daily conversations with cloud users – we learned that about half of these compute resources are used for non-production purposes: that is, development, staging, testing, QA, and other behind-the-scenes work. The majority of servers used for these functions do not need to run 24 hours a day, 7 days a week. In fact, they’re generally only needed for a 40-hour workweek at most (even this assumes maximum efficiency with developers accessing these servers during their entire workdays).

Since most compute infrastructure is sold by the hour, that means that for the other 128 hours of the week, you’re paying for time you’re not using. Ouch.
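
Putting the article’s own numbers together gives a rough sense of scale (a back-of-the-envelope sketch, not a formal estimate):

```python
iaas_spend_billion = 22.4          # total IaaS spend cited above
compute_share = 2.0 / 3.0          # roughly 2/3 of IaaS goes to compute
non_production_share = 0.5         # about half of compute is non-production
hours_needed = 40.0                # a generous working week
hours_in_week = 24.0 * 7.0         # 168 hours billed if servers run nonstop

idle_fraction = 1 - hours_needed / hours_in_week   # ~0.76, i.e., 128 idle hours
waste_billion = (iaas_spend_billion * compute_share
                 * non_production_share * idle_fraction)

print("Roughly $%.1fB a year paid for idle non-production hours" % waste_billion)
# -> roughly $5.7B a year under these assumptions
```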

All You Need to Do is Turn Out the Lights

A huge portion of IT spending could be eliminated simply by “turning out the lights” – that is, by stopping hourly servers when they are not needed, so you only pay for the hours you’re actually using. Luckily, this does not have to be a manual process. You can automatically schedule off times for your servers, to ensure they’re always off when you don’t need them (and to save you time!).
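
“Turning out the lights” can be as simple as a scheduled script. The sketch below uses boto3 to stop running AWS EC2 instances that carry a hypothetical Schedule=office-hours tag; the tag convention, and the idea of invoking this from cron or a scheduled function each evening, are assumptions for illustration, and other clouds expose equivalent APIs.

```python
import boto3

ec2 = boto3.client("ec2")

def stop_office_hours_instances():
    """Stop every running instance tagged Schedule=office-hours (hypothetical tag)."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    # Run from a scheduler (e.g., cron at 7 p.m. on weekdays).
    print("Stopped:", stop_office_hours_instances())
```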

Original article here.


standard

Magic Quadrant for Cloud Infrastructure as a Service, Worldwide

2016-08-10 - By 

Summary

The market for cloud IaaS has consolidated significantly around two leading service providers. The future of other service providers is increasingly uncertain and customers must carefully manage provider-related risks.

Market Definition/Description

Cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using internet technologies. Cloud infrastructure as a service (IaaS) is a type of cloud computing service; it parallels the infrastructure and data center initiatives of IT. Cloud compute IaaS constitutes the largest segment of this market (the broader IaaS market also includes cloud storage and cloud printing). Only cloud compute IaaS is evaluated in this Magic Quadrant; it does not cover cloud storage providers, platform as a service (PaaS) providers, SaaS providers, cloud service brokerages (CSBs) or any other type of cloud service provider, nor does it cover the hardware and software vendors that may be used to build cloud infrastructure. Furthermore, this Magic Quadrant is not an evaluation of the broad, generalized cloud computing strategies of the companies profiled.

In the context of this Magic Quadrant, cloud compute IaaS (hereafter referred to simply as “cloud IaaS” or “IaaS”) is defined as a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near real time, and metered by use. Self-service interfaces are exposed directly to the customer, including a web-based UI and an API. The resources may be single-tenant or multitenant, and hosted by the service provider or on-premises in the customer’s data center. Thus, this Magic Quadrant covers both public and private cloud IaaS offerings.

Cloud IaaS includes not just the resources themselves, but also the automated management of those resources, management tools delivered as services, and cloud software infrastructure services. The last category includes middleware and databases as a service, up to and including PaaS capabilities. However, it does not include full stand-alone PaaS capabilities, such as application PaaS (aPaaS) and integration PaaS (iPaaS).

We draw a distinction between cloud infrastructure as a service, and cloud infrastructure as an enabling technology; we call the latter “cloud-enabled system infrastructure” (CESI). In cloud IaaS, the capabilities of a CESI are directly exposed to the customer through self-service. However, other services, including noncloud services, may be delivered on top of a CESI; these cloud-enabled services may include forms of managed hosting, data center outsourcing and other IT outsourcing services. In this Magic Quadrant, we evaluate only cloud IaaS offerings; we do not evaluate cloud-enabled services.

Gartner’s clients are mainly enterprises, midmarket businesses and technology companies of all sizes, and the evaluation focuses on typical client requirements. This Magic Quadrant covers all the common use cases for cloud IaaS, including development and testing, production environments (including those supporting mission-critical workloads) for both internal and customer-facing applications, batch computing (including high-performance computing [HPC]) and disaster recovery. It encompasses both single-application workloads and virtual data centers (VDCs) hosting many diverse workloads. It includes suitability for a wide range of application design patterns, including both cloud-native application architectures and enterprise application architectures.

Customers typically exhibit a bimodal IT sourcing pattern for cloud IaaS (see “Bimodal IT: How to Be Digitally Agile Without Making a Mess” and “Best Practices for Planning a Cloud Infrastructure-as-a-Service Strategy — Bimodal IT, Not Hybrid Infrastructure” ). Most cloud IaaS is bought for Mode 2 agile IT, emphasizing developer productivity and business agility, but an increasing amount of cloud IaaS is being bought for Mode 1 traditional IT, with an emphasis on cost reduction, safety and security. Infrastructure and operations (I&O) leaders typically lead the sourcing for Mode 1 cloud needs. By contrast, sourcing for Mode 2 offerings is typically driven by enterprise architects, application development leaders and digital business leaders. This Magic Quadrant considers both sourcing patterns and their associated customer behaviors and requirements.

This Magic Quadrant strongly emphasizes self-service and automation in a standardized environment. It focuses on the needs of customers whose primary need is self-service cloud IaaS, although this may be supplemented by a small amount of colocation or dedicated servers. In self-service cloud IaaS, the customer retains most of the responsibility for IT operations (even if the customer subsequently chooses to outsource that responsibility via third-party managed services).

Organizations that need significant customization or managed services for a single application, or that are seeking cloud IaaS as a supplement to a traditional hosting solution (“hybrid hosting”), should consult the Magic Quadrants for managed hosting instead ( “Magic Quadrant for Cloud-Enabled Managed Hosting, North America,” “Magic Quadrant for Managed Hybrid Cloud Hosting, Europe” and “Magic Quadrant for Cloud-Enabled Managed Hosting, Asia/Pacific” ). Organizations that want a fully custom-built solution, or managed services with an underlying CESI, should consult the Magic Quadrants for data center outsourcing and infrastructure utility services ( “Magic Quadrant for Data Center Outsourcing and Infrastructure Utility Services, North America,” “Magic Quadrant for Data Center Outsourcing and Infrastructure Utility Services, Europe” and “Magic Quadrant for Data Center Outsourcing and Infrastructure Utility Services, Asia/Pacific” ).

This Magic Quadrant evaluates all industrialized cloud IaaS solutions, whether public cloud (multitenant or mixed-tenancy), community cloud (multitenant but limited to a particular customer community), or private cloud (fully single-tenant, hosted by the provider or on-premises). It is not merely a Magic Quadrant for public cloud IaaS. To be considered industrialized, a service must be standardized across the customer base. Although most of the providers in this Magic Quadrant do offer custom private cloud IaaS, we have not considered these nonindustrialized offerings in our evaluations. Organizations that are looking for custom-built, custom-managed private clouds should use our Magic Quadrants for data center outsourcing and infrastructure utility services instead (see above).

Understanding the Vendor Profiles, Strengths and Cautions

Cloud IaaS providers that target enterprise and midmarket customers generally offer a high-quality service, with excellent availability, good performance, high security and good customer support. Exceptions will be noted in this Magic Quadrant’s evaluations of individual providers. Note that when we say “all providers,” we specifically mean “all the evaluated providers included in this Magic Quadrant,” not all cloud IaaS providers in general. Keep the following in mind when reading the vendor profiles:

  • All the providers have a public cloud IaaS offering. Many also have an industrialized private cloud offering, where every customer is on standardized infrastructure and cloud management tools, although this may or may not resemble the provider’s public cloud service in either architecture or quality. A single architecture and feature set and cross-cloud management, for both public and private cloud IaaS, make it easier for customers to combine and migrate across service models as their needs dictate, and enable the provider to use its engineering investments more effectively. Most of the providers also offer custom private clouds.

  • Most of the providers have offerings that can serve the needs of midmarket businesses and enterprises, as well as other companies that use technology at scale. A few of the providers primarily target individual developers, small businesses and startups, and lack the features needed by larger organizations, although that does not mean that their customer base is exclusively small businesses.

  • Most of the providers are oriented toward the needs of Mode 1 traditional IT, especially IT operations organizations, with an emphasis on control, governance and security; many such providers have a “rented virtualization” orientation, and are capable of running both new and legacy applications, but are unlikely to provide transformational benefits. A much smaller number of providers are oriented toward the needs of Mode 2 agile IT; these providers typically emphasize capabilities for new applications and a DevOps orientation, but are also capable of running legacy applications and being managed in a traditional fashion.

  • All the providers offer basic cloud IaaS — compute, storage and networking resources as a service. A few of the providers offer additional value-added capabilities as well, notably cloud software infrastructure services — typically middleware and databases as a service — up to and including PaaS capabilities. These services, along with IT operations management (ITOM) capabilities as a service (especially DevOps-related services) are a vital differentiator in the market, especially for Mode 2 agile IT buyers.

  • We consider an offering to be public cloud IaaS if the storage and network elements are shared; the compute can be multitenant, single-tenant or both. Private cloud IaaS uses single-tenant compute and storage, but unless the solution is on the customer’s premises, the network is usually still shared.

  • In general, monthly compute availability SLAs of 99.95% and higher are the norm, and they are typically higher than availability SLAs for managed hosting. Service credits for outages in a given month are typically capped at 100% of the monthly bill. This availability percentage is typically non-negotiable, as it is based on an engineering estimate of the underlying infrastructure reliability. Maintenance windows are normally excluded from the SLA. (A worked example of what such an SLA allows appears after this list.)

  • Some providers have a compute availability SLA that requires the customer to use compute capabilities in at least two fault domains (sometimes known as “availability zones” or “availability sets”); an SLA violation requires both fault domains to fail. Providers with an SLA of this type are explicitly noted as having a multi-fault-domain SLA.

  • Very few of the providers have an SLA for compute or storage performance. However, most of the providers do not oversubscribe compute or RAM resources; providers that do not guarantee resource allocations are noted explicitly.

  • Many providers have additional SLAs covering network availability and performance, customer service responsiveness and other service aspects.

  • Infrastructure resources are not normally automatically replicated into multiple data centers, unless otherwise noted; customers are responsible for their own business continuity. Some providers offer optional disaster recovery solutions.

  • All providers offer, at minimum, per-hour metering of virtual machines (VMs), and some can offer shorter metering increments, which can be more cost-effective for short-term batch jobs. Providers charge on a per-VM basis, unless otherwise noted. Some providers offer either a shared resource pool (SRP) pricing model or are flexible about how they price the service. In the SRP model, customers contract for a certain amount of capacity (in terms of CPU and RAM), but can allocate that capacity to VMs in an arbitrary way, including being able to oversubscribe that capacity voluntarily; additional capacity can usually be purchased on demand by the hour.

  • Some of the providers are able to offer bare-metal physical servers on a dynamic basis. Due to the longer provisioning times involved for physical equipment (two hours is common), the minimum billing increment for such servers is usually daily, rather than hourly. Providers with a bare-metal option are noted as such.

  • All the providers offer an option for colocation, unless otherwise noted. Many customers have needs that require a small amount of supplemental colocation in conjunction with their cloud — most frequently for a large-scale database, but sometimes for specialized network equipment, software that cannot be licensed on virtualized servers, or legacy equipment. Colocation is specifically mentioned only when a service provider actively sells colocation as a stand-alone service; a significant number of midmarket customers plan to move into colocation and then gradually migrate into that provider’s IaaS offering. If a provider does not offer colocation itself but can meet such needs via a partner exchange, this is explicitly noted.

  • All the providers claim to have high security standards. The extent of the security controls provided to customers varies significantly, though. All the providers evaluated can offer solutions that will meet common regulatory compliance needs, unless otherwise noted. All the providers have SSAE 16 audits for their data centers (see Note 1). Some may have security-specific third-party assessments such as ISO 27001 or SOC 2 for their cloud IaaS offerings (see Note 2), both of which provide a relatively high level of assurance that the providers are adhering to generally accepted practices for the security of their systems, but do not address the extent of controls offered to customers. Security is a shared responsibility; customers need to correctly configure controls and may need to supply additional controls beyond what their provider offers.

  • Some providers offer a software marketplace where software vendors specially license and package their software to run on that provider’s cloud IaaS offering. Marketplace software can be automatically installed with a click, and can be billed through the provider. Some marketplaces also contain other third-party solutions and services.

  • All providers offer enterprise-class support with 24/7 customer service, via phone, email and chat, along with an account manager. Most providers include this with their offering. Some offer a lower level of support by default, but allow customers to pay extra for enterprise-class support.

  • All the providers will sign contracts with customers, can invoice, and can consolidate bills from multiple accounts. While some may also offer online sign-up and credit card billing, they recognize that enterprise buyers prefer contracts and invoices. Some will sign “zero dollar” contracts that do not commit a customer to a certain volume.

  • Many of the providers have white-label or reseller programs, and some may be willing to license their software. We mention software licensing only when it is a significant portion of the provider’s business; other service providers, not enterprises, are usually the licensees. We do not mention channel programs; potential partners should simply assume that all these companies are open to discussing a relationship.

  • Most of the providers offer optional managed services on IaaS. However, not all offer the same type of managed services on IaaS as they do in their broader managed hosting or data center outsourcing services. Some may have managed service provider (MSP) or system integrator (SI) partners that provide managed and professional services.

  • All the evaluated providers offer a portal, documentation, technical support, customer support and contracts in English. Some can provide one or more of these in languages other than English. Most providers can conduct business in local languages, even if all aspects of service are English-only. Customers who need multilingual support will find it very challenging to source an offering.

  • All the providers are part of very large corporations or otherwise have a well-established business. However, many of the providers are undergoing significant re-evaluation of their cloud IaaS businesses. Existing and prospective customers should be aware that such providers may make significant changes to the strategy and direction of their cloud IaaS business, including replacing their current offering with a new platform, or exiting this business entirely in favor of partnering with a more successful provider.
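
To make the availability numbers above concrete, here is a small worked example of the downtime budget a 99.95% monthly compute SLA implies, and how a capped service credit plays out; the bill amount and credit rate are hypothetical.

```python
MINUTES_PER_MONTH = 30 * 24 * 60        # 43,200 minutes in a 30-day month

def allowed_downtime_minutes(sla):
    """Downtime budget implied by a monthly availability SLA."""
    return MINUTES_PER_MONTH * (1 - sla)

def service_credit(monthly_bill, credit_rate, cap=1.0):
    """Credit for an SLA breach, capped (typically at 100% of the monthly bill)."""
    return min(monthly_bill * credit_rate, monthly_bill * cap)

print("A 99.95%% SLA allows about %.1f minutes of downtime per month"
      % allowed_downtime_minutes(0.9995))              # ~21.6 minutes
print("Credit on a $10,000 bill at a hypothetical 10%% rate: $%.0f"
      % service_credit(10000, 0.10))                   # never more than $10,000
```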

In previous years, this Magic Quadrant has provided significant technical detail on the offerings. These detailed evaluations are now published in “Critical Capabilities for Public Cloud Infrastructure as a Service, Worldwide” instead.

The service provider descriptions are accurate as of the time of publication. Our technical evaluation of service features took place between January 2016 and April 2016.

Format of the Vendor Descriptions

When describing each provider, we first summarize the nature of the company and then provide information about its industrialized cloud IaaS offerings in the following format:

Offerings: A list of the industrialized cloud IaaS offerings (both public and private) that are directly offered by the provider. Also included is commentary on the ways in which these offerings deviate from the standard capabilities detailed in the Understanding the Vendor Profiles, Strengths and Cautions section above. We also list related capabilities of interest, such as object storage, content delivery network (CDN) and managed services, but this is not a comprehensive listing of the provider’s offerings.

Locations: Cloud IaaS data center locations by country, languages that the company does business in, and languages that technical support can be conducted in.

Recommended mode: We note whether the vendor’s offerings are likely to appeal to Mode 1 safety-and-efficiency-oriented IT, Mode 2 agility-oriented IT, or both. We also note whether the offerings are likely to be useful for organizations seeking IT transformation. This recommendation reflects the way that a provider goes to market, provides service and support, and designs its offerings. All such statements are specific to the provider’s cloud IaaS offering, not the provider as a whole.

Recommended uses: These are the circumstances under which we recommend the provider. These are not the only circumstances in which it may be a useful provider, but these are the use cases it is best used for. For a more detailed explanation of the use cases, see the Recommended Uses section below.

In the list of offerings, we state the basis of each provider’s virtualization technology and, if relevant, its cloud management platform (CMP). We also state what APIs it supports — the Amazon Web Services (AWS), OpenStack and vCloud APIs are the three that have broad adoption, but many providers also have their own unique API. Note that supporting one of the three common APIs does not provide assurance that a provider’s service is compatible with a specific tool that purports to support that API; the completeness and accuracy of API implementations vary considerably. Furthermore, the use of the same underlying CMP or API compatibility does not indicate that two services are interoperable. Specifically, OpenStack-based clouds differ significantly from one another, limiting portability; the marketing hype of “no vendor lock-in” is, practically speaking, untrue.

For many customers, the underlying hypervisor will matter, particularly for those that intend to run commercial software on IaaS. Many independent software vendors (ISVs) support only VMware virtualization, and those vendors that support Xen may support only Citrix XenServer, not open-source Xen (which is often customized by IaaS providers and is likely to be different from the current open-source version). Similarly, some ISVs may support the Kernel-based Virtual Machine (KVM) hypervisor in the form of Red Hat Enterprise Virtualization, whereas many IaaS providers use open-source KVM.

For a detailed technical description of public cloud IaaS offerings, along with a use-case-focused technical evaluation, see “Critical Capabilities for Public Cloud Infrastructure as a Service, Worldwide.”

We also provide a detailed list of evaluation criteria in “Evaluation Criteria for Cloud Infrastructure as a Service.” We have used those criteria to perform in-depth assessments of several providers: see “In-Depth Assessment of Amazon Web Services,” “In-Depth Assessment of Google Cloud Platform,” “In-Depth Assessment of SoftLayer, an IBM Company” and “In-Depth Assessment of Microsoft Azure IaaS.”

Recommended Uses

For each vendor, we provide recommendations for use. The most typical recommended uses are:

  • Cloud-native applications. These are applications specifically architected to run in a cloud IaaS environment, using cloud-native principles and design patterns.

  • E-business hosting. These are e-marketing sites, e-commerce sites, SaaS applications, and similar modern websites and web-based applications. They are usually internet-facing. They are designed to scale out and are resilient to infrastructure failure, but they might not use cloud transaction processing principles.

  • General business applications. These are the kinds of general-purpose workloads typically found in the internal data centers of most traditional businesses; the application users are usually located within the business. Many such workloads are small, and they are often not designed to scale out. They are usually architected with the assumption that the underlying infrastructure is reliable, but they are not necessarily mission-critical. Examples include intranet sites, collaboration applications such as Microsoft SharePoint and many business process applications.

  • Enterprise applications. These are general-purpose workloads that are mission-critical, and they may be complex, performance-sensitive or contain highly sensitive data; they are typical of a modest percentage of the workloads found in the internal data centers of most traditional businesses. They are usually not designed to scale out, and the workloads may demand large VM sizes. They are architected with the assumption that the underlying infrastructure is reliable and capable of high performance.

  • Development environments. These workloads are related to the development and testing of applications. They are assumed not to require high availability or high performance. However, they are likely to require governance for teams of users.

  • Batch computing. These workloads include high-performance computing (HPC), big data analytics and other workloads that require large amounts of capacity on demand. They do not require high availability, but may require high performance.

  • Internet of Things (IoT) applications. IoT applications typically combine the traits of cloud-native applications with the traits of big data applications. They typically require high availability, flexible and scalable capacity, interaction with distributed and mobile client devices, and strong security; many such applications also have significant regulatory compliance requirements.

For all the vendors, the recommended uses are specific to self-managed cloud IaaS. However, many of the providers also have managed services, as well as other cloud and noncloud services that may be used in conjunction with cloud IaaS. These include hybrid hosting (customers sometimes blend solutions, such as an entirely self-managed front-end web tier on public cloud IaaS, with managed hosting for the application servers and database), as well as hybrid IaaS/PaaS solutions. Even though we do not evaluate managed services, PaaS and the like in this Magic Quadrant, they are part of a vendor’s overall value proposition and we mention them in the context of providing more comprehensive solution recommendations.

Magic Quadrant

 
Figure 1. Magic Quadrant for Cloud Infrastructure as a Service, Worldwide (figure not reproduced here)

Source: Gartner (August 2016)

See original article here.
 
 

standard

Are Wireless Keyboards Leaking Your Data?

2016-08-09 - By 

Wireless keyboards transmit every keystroke to your computer, via a low-power radio signal. Is it possible for a hacker to intercept that signal, to steal your passwords and other sensitive data? In some cases, yes. Should you panic? Maybe. Here’s what you need to know…

Is Your Keyboard Secure?

Tech news is pretty slow during the dog days of summer, so it’s a perfect time to grab headlines by beating dead horses. That’s what happened at the end of July, when the tech media suddenly exploded with headlines like these:

“Flaws in wireless keyboards let hackers snoop on everything you type” … “Radio Hack Steals Keystrokes from Millions of Wireless Keyboards” … “It’s Shockingly Easy to Hack Some Wireless Keyboards” … and “Hackers can pick off, inject wireless keyboard keystrokes from 8 vendors, maybe more”.

I suppose they needed to write about something besides the July 29 end of free Windows 10 upgrades, if only for a day.

The brief uproar originated from Atlanta-based Bastille Networks. Bastille specializes in “software and sensor technologies to detect and mitigate threats affecting the Internet of Things,” particularly wireless things such as keyboards, mice, security cameras, etc. Founded in March, 2014, Bastille is a startup struggling for name recognition. It found some in the flurry of FUD (fear, uncertainty, and doubt) that its latest report unleashed.

The gist of that report is that wireless keyboards from at least eight manufacturers either lack encryption entirely or implement it so badly that it does not stop hackers from injecting keystrokes into a user’s computer. Bad guys can take over your machine from a distance of up to 250 feet, Bastille claims, or record your login credentials and other sensitive information as you type it.

Nothing New Under the Sun

The thing is, this vulnerability of wireless input devices has been known for years; here is an article on the subject from 2007. Yet I have not seen a single example of any user who has been hacked via a wireless keyboard or mouse.

The eight manufacturers whose keyboards and/or mice Bastille tested include Hewlett-Packard, Anker, Kensington, RadioShack, Insignia, Toshiba, GE/Jasco and EagleTec. The exact models in which Bastille found vulnerabilities are listed here.

Only three vendors – Anker, GE, and Kensington – have responded to Bastille’s alarm about their products. All of them are dutifully grateful to Bastille for bringing this matter to their attention. Anker and Kensington also state that they have received no complaints involving the issue. Anker has withdrawn its vulnerable product from the market, and will exchange existing products for another (presumably secure) one — if the original product is still under warranty.

Only Kensington states that it has released a new product with AES encryption, the Pro-Fit Wireless Desktop Set (http://goo.gl/rY0tS7), with a $29.95 list price. That’s not bad at all for a wireless keyboard and wireless mouse combo. I have seen rip-offs on Amazon that want $249 for similar encrypted wireless keyboards alone.

In the end, Bastille has done the world a service by forcing at least one major manufacturer to implement encryption on its wireless input devices. The vulnerability will probably continue to be ignored by most other vendors, and by users who value low price over high security.

Should You Replace Your Keyboard?

There is no evidence that hackers have been exploiting this vulnerability, despite it being well known for over ten years. But then again, identity theft is rampant, and the cause cannot always be determined with certainty. I was checking into a hotel last week, and I noticed that the desk clerk was using a wireless keyboard. Hopefully it was a secure model that didn’t broadcast my home address, driver’s license and credit card number to that sketchy guy hanging out in the lobby with a laptop.

If you’re a home user with a wireless keyboard on the naughty list mentioned above, the chances that you’ll be targeted by hackers within a 250-foot radius seem pretty slim to me. But if you work in a business where you deal with sensitive customer data, you should consider swapping out your vulnerable wireless keyboards for a wired model, or get one that implements the wireless feature securely.

Do you use a wireless keyboard? Your thoughts on this topic are welcome. Post your comment or question below…

Original article here.

 


standard

Develop Android Apps Using MIT App Inventor

2016-08-08 - By 

There is a secret inventor inside each of us. Get your creative juices flowing and go ahead and develop an Android app or two. It is easier than you think. Follow the detailed instructions given in this article, and you will have an Android app up and running in next to no time.

Imagine that you have come up with an idea for an app to address your own requirements, but due to lack of knowledge and information, don’t know where to begin. You could contact an Android developer, who would charge you for his services, and you would also risk having your idea copied or stolen. You may also feel that you can’t develop the app yourself as you do not have the required programming and coding skills. But that’s not true. Let me tell you that you can develop Android apps on your own without any programming or coding. Here’s how:

An introduction to App Inventor
App Inventor is a tool that will convert your idea into a working application without the need for any prior coding or programming skills. App Inventor is an open source utility developed by Google in 2010 and, currently, it is maintained by the Massachusetts Institute of Technology (MIT). It allows absolute beginners in computer science to develop Android applications. It provides you with a graphical user interface with all the necessary components required to build Android apps. You just need to drag and drop the components in the block editor. Each block is an independent action, which you need to arrange in a logical sequence to produce the desired behaviour.

App Inventor features
App Inventor is a feature-rich open source Android app development tool. Here are its features.

1. Open source: Being open source, the tool is free for everyone and you don’t need to purchase anything. Open source software also gives you the freedom to customise it as per your own requirements.

2. Browser based: App Inventor runs on your browser; hence, you don’t need to install anything onto your computer. You just need to log in to your account using your email and password credentials.

3. Stored on the cloud: All your app related projects are stored on Google Cloud; therefore, you need not keep anything on your laptop or computer. Being browser based, it allows you to log in to your account from any device and all your work is in sync with the cloud database.

4. Real-time testing: App Inventor provides a standalone emulator that enables you to view the behaviour of your apps on a virtual machine. Complete your project and see it running on the emulator in real-time.

5. No coding required: As mentioned earlier, it is a tool with a graphical user interface, with all the built-in component blocks and logical blocks. You assemble multiple blocks together to produce a particular behaviour or action.

6. Huge developer community: You will meet like-minded developers from across the world. You can post your queries regarding certain topics and these will be answered quickly. The community is very supportive and knowledgeable.

System requirements
Given below are the system requirements to run App Inventor from your computer or laptop:
1. A valid Google account, as you need to log in using your credentials.
2. A working Internet connection, as App Inventor is cloud based and runs in your browser.
3. App Inventor is compatible with Google Chrome 29+, Safari 5+ and Firefox 23+.
4. An Android phone to test the final, developed application.

Beginning with App Inventor
Hope you have everything to begin your journey of Android app development with App Inventor. Follow the steps below to make your first project.
1.   Open your Google Chrome/Safari/Firefox browser and open the Google home page.
2.   Using the search box, search for App Inventor.
3.   Choose the very first link. It will redirect you to the App Inventor project’s main page. This page contains all the resources and tutorials related to App Inventor. We will explore it later. For now, click on the Create button on the top right corner.
4.   The next page will ask for your Google account credentials. Enter your user name and password that you use for your Gmail application.
5.   Click on the Sign in button, and you will successfully reach the App Inventor app development page. If the page asks you to confirm your approval of certain usage or policy terms, agree with them all. It is all safe and is mandatory if you want to move ahead.
6.   If all is done correctly, you should see a page similar to what’s shown in Figure 5.
7.   Congratulations! You have successfully set up all the necessary things and can now develop your first application.

For step-by-step instructions to develop your first App Inventor application, and to see the full original article, go here.

 


standard

Dashlane & Google in open source password manager project

2016-08-07 - By 

PC and tablet warriors who must access files and applications for work and for play tolerate their password rituals, whether dozens of times a day or more. Painful as entering passwords may be—forgetting some passwords, hitting the wrong keys trying to enter others, leaving caps lock on—they offer security.

Passwords are necessary if you want to join life on the Internet. “If you are someone who spends a lot of time on the Internet, you probably have a ton of different ids at different websites, and if you follow the standard advice, a ton of different passwords as well. More than likely, you are also using a password manager, because it is difficult for us to remember all of this detail accurately all the time,” said Aamir Siddiqui in XDA Developers.

For those who do use password managers, the interesting development now is that Dashlane, a password management business, and Google have put their heads together to set up an open source API project. This is aimed at making app logins for Android users simple yet secure.

The Open YOLO project, as it is called, will let apps access your password manager of choice. The name is easy to remember: Open YOLO stands for “You Only Login Once.”

Associate Editor Steve Dent in Engadget said that “you can log into apps automatically with no typing or insecure autofill. Dashlane is spearheading the venture in cooperation with other password managers…” Jose Vilches in TechSpot spelled out how this is going to work.

“The main idea is to allow any app built using the OpenYOLO API to access passwords stored in password managers that support the standard. Presumably once you sign in to whatever compatible password manager is on your device, you’ll be automatically signed in to the apps on your device without having to input your password multiple times.”

(Steve Dent remarked how “Details are light on how it works, but we assume you’d log in once to your password manager then get access to all apps that support Open YOLO.”)

The Dashlane blog posted on August 4 by Malaika Nicholas announced the project. Actually, other “leading password managers” are also getting involved in the collaboration, she said.

The blog described this as an open API for app developers. It gives Android apps the ability to access passwords stored in the user’s favorite password manager, and logs the user into those applications.

Vilches in TechSpot said that “OpenYOLO won’t be limited to Dashlane or eventually even Android for that matter… The company also hopes to make the API available on other platforms over time.”

Meanwhile, Gabriel Avner in Geektime covered the question of what good are password managers. Do you really want them in your life?

“For those who are perhaps a bit less security conscious (read paranoid) and are unfamiliar with password managers, they provide a nifty way to generate and store strong passwords without the need to remember them by heart or write them down on slips of paper.”

Avner also said, “Understandably, not everyone is comfortable with the idea of forking over all of your passwords to a single database that if cracked could expose them to unwanted intrusions of privacy or harm. However the alternative to password managers is not really much of an option.”

Ben Schoon in 9to5Google said that Google will likely be releasing more information on this API in the near future.

Original article here.

 


standard

OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free

2016-08-07 - By 

THE FRIDAY AFTERNOON news dump, a grand tradition observed by politicians and capitalists alike, is usually supposed to hide bad news. So it was a little weird that Elon Musk, founder of electric car maker Tesla, and Sam Altman, president of famed tech incubator Y Combinator, unveiled their new artificial intelligence company at the tail end of a weeklong AI conference in Montreal this past December.

But there was a reason they revealed OpenAI at that late hour. It wasn’t that no one was looking. It was that everyone was looking. When some of Silicon Valley’s most powerful companies caught wind of the project, they began offering tremendous amounts of money to OpenAI’s freshly assembled cadre of artificial intelligence researchers, intent on keeping these big thinkers for themselves. The last-minute offers—some made at the conference itself—were large enough to force Musk and Altman to delay the announcement of the new startup. “The amount of money was borderline crazy,” says Wojciech Zaremba, a researcher who was joining OpenAI after internships at both Google and Facebook and was among those who received big offers at the eleventh hour.

How many dollars is “borderline crazy”? Two years ago, as the market for the latest machine learning technology really started to heat up, Microsoft Research vice president Peter Lee said that the cost of a top AI researcher had eclipsed the cost of a top quarterback prospect in the National Football League—and he meant under regular circumstances, not when two of the most famous entrepreneurs in Silicon Valley were trying to poach your top talent. Zaremba says that as OpenAI was coming together, he was offered two or three times his market value.

OpenAI didn’t match those offers. But it offered something else: the chance to explore research aimed solely at the future instead of products and quarterly earnings, and to eventually share most—if not all—of this research with anyone who wants it. That’s right: Musk, Altman, and company aim to give away what may become the 21st century’s most transformative technology—and give it away for free.

Zaremba says those borderline crazy offers actually turned him off—despite his enormous respect for companies like Google and Facebook. He felt like the money was at least as much of an effort to prevent the creation of OpenAI as a play to win his services, and it pushed him even further towards the startup’s magnanimous mission. “I realized,” Zaremba says, “that OpenAI was the best place to be.”

That’s the irony at the heart of this story: even as the world’s biggest tech companies try to hold onto their researchers with the same fierceness that NFL teams try to hold onto their star quarterbacks, the researchers themselves just want to share. In the rarefied world of AI research, the brightest minds aren’t driven by—or at least not only by—the next product cycle or profit margin. They want to make AI better, and making AI better doesn’t happen when you keep your latest findings to yourself.

This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called “reinforcement learning”—one of the key technologies that, among other things, drove the creation of AlphaGo, the Google AI that shocked the world by mastering the ancient game of Go. With this toolkit, you can build systems that simulate a new breed of robot, play Atari games, and, yes, master the game of Go.
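
The article doesn’t name the toolkit; assuming it refers to what OpenAI shipped as Gym, a minimal random-agent loop against one of its environments looks roughly like the sketch below (classic Gym API, which may differ in later releases):

```python
# A minimal sketch, assuming the toolkit is OpenAI Gym (pip install gym).
# The classic API returns (observation, reward, done, info) from step().
import gym

env = gym.make("CartPole-v0")           # a simple control task often used in RL tutorials
observation = env.reset()

for _ in range(200):
    action = env.action_space.sample()  # a real agent would learn to pick better actions
    observation, reward, done, info = env.step(action)
    if done:                            # episode over: the pole fell or time ran out
        observation = env.reset()

env.close()
```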

But game-playing is just the beginning. OpenAI is a billion-dollar effort to push AI as far as it will go. In both how the company came together and what it plans to do, you can see the next great wave of innovation forming. We’re a long way from knowing whether OpenAI itself becomes the main agent for that change. But the forces that drove the creation of this rather unusual startup show that the new breed of AI will not only remake technology, but remake the way we build technology.

AI Everywhere

Silicon Valley is not exactly averse to hyperbole. It’s always wise to meet bold-sounding claims with skepticism. But in the field of AI, the change is real. Inside places like Google and Facebook, a technology called deep learning is already helping Internet services identify faces in photos, recognize commands spoken into smartphones, and respond to Internet search queries. And this same technology can drive so many other tasks of the future. It can help machines understand natural language—the natural way that we humans talk and write. It can create a new breed of robot, giving automatons the power to not only perform tasks but learn them on the fly. And some believe it can eventually give machines something close to common sense—the ability to truly think like a human.

But along with such promise comes deep anxiety. Musk and Altman worry that if people can build AI that can do great things, then they can build AI that can do awful things, too. They’re not alone in their fear of robot overlords, but perhaps counterintuitively, Musk and Altman also think that the best way to battle malicious AI is not to restrict access to artificial intelligence but expand it. That’s part of what has attracted a team of young, hyper-intelligent idealists to their new project.

OpenAI began one evening last summer in a private room at Silicon Valley’s Rosewood Hotel—an upscale, urban, ranch-style hotel that sits, literally, at the center of the venture capital world along Sand Hill Road in Menlo Park, California. Elon Musk was having dinner with Ilya Sutskever, who was then working on the Google Brain, the company’s sweeping effort to build deep neural networks—artificially intelligent systems that can learn to perform tasks by analyzing massive amounts of digital data, including everything from recognizing photos to writing email messages to, well, carrying on a conversation. Sutskever was one of the top thinkers on the project. But even bigger ideas were in play.

Sam Altman, whose Y Combinator helped bootstrap companies like Airbnb, Dropbox, and Coinbase, had brokered the meeting, bringing together several AI researchers and a young but experienced company builder named Greg Brockman, previously the chief technology officer at Stripe, another Y Combinator company and a high-profile Silicon Valley digital payments startup. It was an eclectic group. But they all shared a goal: to create a new kind of AI lab, one that would operate outside the control not only of Google, but of anyone else. “The best thing that I could imagine doing,” Brockman says, “was moving humanity closer to building real AI in a safe way.”

Musk was there because he’s an old friend of Altman’s—and because AI is crucial to the future of his various businesses and, well, the future as a whole. Tesla needs AI for its inevitable self-driving cars. SpaceX, Musk’s other company, will need it to put people in space and keep them alive once they’re there. But Musk is also one of the loudest voices warning that we humans could one day lose control of systems powerful enough to learn on their own.

The trouble was: so many of the people most qualified to solve all those problems were already working for Google (and Facebook and Microsoft and Baidu and Twitter). And no one at the dinner was quite sure that these thinkers could be lured to a new startup, even if Musk and Altman were behind it. But one key player was at least open to the idea of jumping ship. “I felt there were risks involved,” Sutskever says. “But I also felt it would be a very interesting thing to try.”

Breaking the Cycle

Emboldened by the conversation with Musk, Altman, and others at the Rosewood, Brockman soon resolved to build the lab they all envisioned. Taking on the project full-time, he approached Yoshua Bengio, a computer scientist at the University of Montreal and one of the founding fathers of the deep learning movement. The field’s other two pioneers—Geoff Hinton and Yann LeCun—are now at Google and Facebook, respectively, but Bengio is committed to life in the world of academia, largely outside the aims of industry. He drew up a list of the best researchers in the field, and over the next several weeks, Brockman reached out to as many on the list as he could, along with several others.

Many of these researchers liked the idea, but they were also wary of making the leap. In an effort to break the cycle, Brockman picked the ten researchers he wanted the most and invited them to spend a Saturday getting wined, dined, and cajoled at a winery in Napa Valley. For Brockman, even the drive into Napa served as a catalyst for the project. “An underrated way to bring people together are these times where there is no way to speed up getting to where you’re going,” he says. “You have to get there, and you have to talk.” And once they reached the wine country, that vibe remained. “It was one of those days where you could tell the chemistry was there,” Brockman says. Or as Sutskever puts it: “the wine was secondary to the talk.”

By the end of the day, Brockman asked all ten researchers to join the lab, and he gave them three weeks to think about it. By the deadline, nine of them were in. And they stayed in, despite those big offers from the giants of Silicon Valley. “They did make it very compelling for me to stay, so it wasn’t an easy decision,” Sutskever says of Google, his former employer. “But in the end, I decided to go with OpenAI, partly because of the very strong group of people and, to a very large extent, because of its mission.”

The deep learning movement began with academics. It’s only recently that companies like Google and Facebook and Microsoft have pushed into the field, as advances in raw computing power have made deep neural networks a reality, not just a theoretical possibility. People like Hinton and LeCun left academia for Google and Facebook because of the enormous resources inside these companies. But they remain intent on collaborating with other thinkers. Indeed, as LeCun explains, deep learning research requires this free flow of ideas. “When you do research in secret,” he says, “you fall behind.”

As a result, big companies now share a lot of their AI research. That’s a real change, especially for Google, which has long kept the tech at the heart of its online empire secret. Recently, Google open sourced the software engine that drives its neural networks. But it still retains the inside track in the race to the future. Brockman, Altman, and Musk aim to push the notion of openness further still, saying they don’t want one or two large corporations controlling the future of artificial intelligence.

The Limits of Openness

All of which sounds great. But for all of OpenAI’s idealism, the researchers may find themselves facing some of the same compromises they had to make at their old jobs. Openness has its limits. And the long-term vision for AI isn’t the only interest in play. OpenAI is not a charity. Musk’s companies could benefit greatly from the startup’s work, and so could many of the companies backed by Altman’s Y Combinator. “There are certainly some competing objectives,” LeCun says. “It’s a non-profit, but then there is a very close link with Y Combinator. And people are paid as if they are working in the industry.”

According to Brockman, the lab doesn’t pay the same astronomical salaries that AI researchers are now getting at places like Google and Facebook. But he says the lab does want to “pay them well,” and it’s offering to compensate researchers with stock options, first in Y Combinator and perhaps later in SpaceX (which, unlike Tesla, is still a private company).

Nonetheless, Brockman insists that OpenAI won’t give special treatment to its sister companies. OpenAI is a research outfit, he says, not a consulting firm. But when pressed, he acknowledges that OpenAI’s idealistic vision has its limits. The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services. “Doing all your research in the open is not necessarily the best way to go. You want to nurture an idea, see where it goes, and then publish it,” Brockman says. “We will produce a lot of open source code. But we will also have a lot of stuff that we are not quite ready to release.”

Both Sutskever and Brockman also add that OpenAI could go so far as to patent some of its work. “We won’t patent anything in the near term,” Brockman says. “But we’re open to changing tactics in the long term, if we find it’s the best thing for the world.” For instance, he says, OpenAI could engage in pre-emptive patenting, a tactic that seeks to prevent others from securing patents.

But to some, patents suggest a profit motive—or at least a weaker commitment to open source than OpenAI’s founders have espoused. “That’s what the patent system is about,” says Oren Etzioni, head of the Allen Institute for Artificial Intelligence. “This makes me wonder where they’re really going.”

The Super-Intelligence Problem

When Musk and Altman unveiled OpenAI, they also painted the project as a way to neutralize the threat of a malicious artificial super-intelligence. Of course, that super-intelligence could arise out of the tech OpenAI creates, but they insist that any threat would be mitigated because the technology would be usable by everyone. “We think it’s far more likely that many, many AIs will work to stop the occasional bad actors,” Altman says.

But not everyone in the field buys this. Nick Bostrom, the Oxford philosopher who, like Musk, has warned against the dangers of AI, points out that if you share research without restriction, bad actors could grab it before anyone has ensured that it’s safe. “If you have a button that could do bad things to the world,” Bostrom says, “you don’t want to give it to everyone.” If, on the other hand, OpenAI decides to hold back research to keep it from the bad guys, Bostrom wonders how it’s different from a Google or a Facebook.

He does say that the not-for-profit status of OpenAI could change things—though not necessarily. The real power of the project, he says, is that it can indeed provide a check for the likes of Google and Facebook. “It can reduce the probability that super-intelligence would be monopolized,” he says. “It can remove one possible reason why some entity or group would have radically better AI than everyone else.”

But as the philosopher explains in a new paper, the primary effect of an outfit like OpenAI—an outfit intent on freely sharing its work—is that it accelerates the progress of artificial intelligence, at least in the short term. And it may speed progress in the long term as well, provided that it, for altruistic reasons, “opts for a higher level of openness than would be commercially optimal.”

“It might still be plausible that a philanthropically motivated R&D funder would speed progress more by pursuing open science,” he says.

Like Xerox PARC

In early January, Brockman’s nine AI researchers met up at his apartment in San Francisco’s Mission District. The project was so new that they didn’t even have white boards. (Can you imagine?) They bought a few that day and got down to work.

Brockman says OpenAI will begin by exploring reinforcement learning, a way for machines to learn tasks by repeating them over and over again and tracking which methods produce the best results. But the other primary goal is what’s called “unsupervised learning”—creating machines that can truly learn on their own, without a human hand to guide them. Today, deep learning is driven by carefully labeled data. If you want to teach a neural network to recognize cat photos, you must feed it a certain number of examples—and these examples must be labeled as cat photos. The learning is supervised by human labelers. But like many other researchers, OpenAI aims to create neural nets that can learn without carefully labeled data.

“If you have really good unsupervised learning, machines would be able to learn from all this knowledge on the Internet—just like humans learn by looking around—or reading books,” Brockman says.
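
To make the supervised/unsupervised distinction concrete, here is a deliberately tiny sketch (my illustration, not OpenAI’s code) of supervised learning: a toy “cat vs. not-cat” classifier that can only learn because every training example carries a human-supplied label. Unsupervised learning would have to find structure in the same feature vectors without the labels array.

```python
# A toy illustration of supervised learning: the model learns only because labels exist.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # stand-in "image features"
labels = (X[:, 0] + X[:, 1] > 0).astype(float)   # human-supplied labels: 1 = cat, 0 = not cat

w, b = np.zeros(5), 0.0
for _ in range(500):                             # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # predicted probability of "cat"
    grad = p - labels                            # the error signal comes from the labels
    w -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean()

accuracy = ((p > 0.5) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```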

He envisions OpenAI as the modern incarnation of Xerox PARC, the tech research lab that thrived in the 1970s. Just as PARC’s largely open and unfettered research gave rise to everything from the graphical user interface to the laser printer to object-oriented programming, Brockman and crew seek to delve even deeper into what we once considered science fiction. PARC was owned by, yes, Xerox, but it fed so many other companies, most notably Apple, because people like Steve Jobs were privy to its research. At OpenAI, Brockman wants to make everyone privy to its research.

This month, hoping to push this dynamic as far as it will go, Brockman and company snagged several other notable researchers, including Ian Goodfellow, another former senior researcher on the Google Brain team. “The thing that was really special about PARC is that they got a bunch of smart people together and let them go where they want,” Brockman says. “You want a shared vision, without central control.”

Giving up control is the essence of the open source ideal. If enough people apply themselves to a collective goal, the end result will trounce anything you concoct in secret. But if AI becomes as powerful as promised, the equation changes. We’ll have to ensure that new AIs adhere to the same egalitarian ideals that led to their creation in the first place. Musk, Altman, and Brockman are placing their faith in the wisdom of the crowd. But if they’re right, one day that crowd won’t be entirely human.

Original article here.


standard

IoT Will Surpass Mobile Phones As Most Connected Devices

2016-08-05 - By 

By 2018, IoT devices will surpass mobile phones as the largest category of connected devices, putting new strains on mobile networks, according to new figures from Ericsson. The increasingly rapid adoption of IoT holds several challenges for IT departments.

Between 2015 and 2021, the internet of things (IoT) is expected to grow at a compounded annual growth rate (CAGR) of 23%, comprising close to 16 billion of the total forecast 28 billion connected devices in 2021.
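
As a quick sanity check of those figures (my arithmetic, not Ericsson’s), a 23% CAGR over the six years from 2015 to 2021 implies a 2015 base of roughly 4.6 billion IoT devices:

```python
# Back-of-the-envelope check of the quoted figures (not from the Ericsson report itself).
cagr = 0.23
years = 6                     # 2015 -> 2021
iot_2021 = 16e9               # ~16 billion IoT devices forecast for 2021

implied_2015_base = iot_2021 / (1 + cagr) ** years
print(f"implied 2015 IoT base: {implied_2015_base / 1e9:.1f} billion")            # ~4.6 billion
print(f"forward check: {implied_2015_base * (1 + cagr) ** years / 1e9:.0f} billion")  # 16 billion
```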

The rapid growth and expansion also means that IoT devices are poised to surpass mobile phones as the largest category of connected devices in 2018, bringing with it a host of security and IT management challenges, according to new figures from the latest Ericsson Mobility Report.

Within IoT, two major market segments with different requirements are developing. The report calls the first segment massive IoT connections and the second critical IoT connections.

Massive IoT connections are characterized by high connection volumes, low cost, low energy-consumption requirements, and small data traffic volumes. This means that these types of IoT connections can cover a wide range of categories, including smart buildings, transport logistics, fleet management, smart meters, and agriculture.

The Ericsson report notes that many things will be connected through capillary networks, which will leverage the ubiquity, security, and management of cellular networks. The result could create a lot of opportunity for IT, as well as challenges related to security and management.

Currently, about 70% of cellular IoT modules are GSM-only, with network mechanisms being implemented to foster extended network coverage for low-rate applications.

The second market segment — critical IoT connections — is characterized by requirements for ultra-reliability and availability with very low latency, for applications such as traffic safety, autonomous cars, industrial applications, remote manufacturing, and healthcare, including remote surgery.

The report noted LTE’s share of cellular IoT device penetration is about 5%, but it also forecast that cost reductions would make LTE-connected devices increasingly viable, enabling new, very low latency applications.

This transformation will be achieved by reducing complexity and limiting modems to IoT application capabilities. Evolved functionality in existing LTE networks, as well as 5G capabilities, are expected to extend the range of addressable applications for critical IoT deployments.

A separate report from CompTIA found 23% of channel companies report that they have already made money from IoT offerings, compared to only 8% in 2015.

Looking forward, one-third of channel firms expect to make money from IoT in the next 12 months, but stumbling blocks to widespread IoT deployments remain, and they’re projected to exist for the foreseeable future.

In particular, concerns about costs and return on investment, as well as technical hurdles and liability, privacy, security, and other regulatory matters are some of the major inhibitors.

Another hurdle to broader IoT adoption that the Ericsson report highlighted is that the number of mobile subscriptions exceeds the population in many countries, largely due to inactive subscriptions, multiple device ownership, or optimization of subscriptions for different types of calls.

[Read why the feds spent $9 billion on IoT in 2015.]

This means the number of subscribers is lower than the number of subscriptions, according to the Ericsson report. There are currently around 5 billion subscribers compared to 7.4 billion subscriptions, with inactive subscriptions and multiple device ownership adding to the disparity.

The situation could cause issues for businesses trying to foster a BYOD workplace, where IT security management depends on having a holistic view of the mobile accounts accessing business servers.

“The complexity of IoT projects is beyond what many companies can handle internally, especially on the SMB end of the spectrum,” Seth Robinson, senior director of technology analysis at CompTIA, wrote in that firm’s report.

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.

Original article here.


standard

5 Ways Brexit Is Accelerating AWS And Public Cloud Adoption

2016-08-02 - By 
  • Deutsche Bank estimates AWS derives about 15% of its total revenue mix or has attained a $1.5B revenue run rate in Europe.
  • AWS is now approximately 6x the size of Microsoft Azure globally according to Deutsche Bank.

These and other insights are from the research note published earlier this month by Deutsche Bank Markets Research titled AWS/Cloud Adoption in Europe and the Brexit Impact written by Karl Keirstead, Alex Tout, Ross Sandler, Taylor McGinnis and Jobin Mathew. The research note is based on discussions the research team had with 20 Amazon Web Services (AWS) customers and partners at the recent AWS user conference held in London earlier this month, combined with their accumulated research on public cloud adoption globally.

These are the five ways Brexit will accelerate AWS and public cloud adoption:

  • The proliferation of European-based data centers is bringing public cloud stability to regions experiencing political instability. AWS currently has active regions in Dublin and Frankfurt, with the former often used by AWS’ European customers due to the broader base of services offered there. An AWS Region is a physical geographic location containing a cluster of data centers, and each region is made up of isolated locations known as availability zones (see the short region-listing sketch after this list). AWS is adding a third European Union (EU) region in the UK with a go-live date of late 2016 or early 2017. Microsoft has 2 of its 26 global regions in Europe, with two more planned in the UK. Google’s Cloud Platform (GCP) has just one active region in Europe. The Data Center Map in the original article provides an overview of the data centers AWS, Microsoft Azure, and GCP have in Europe today and have planned for the future.

  • Brexit is making data sovereignty king. European-based enterprises have long been cautious about using cloud platforms to store their many forms of data. Brexit is accelerating the needs European enterprises have for greater control over their data, especially those based in the UK. Amazon’s planned third EU region based in London scheduled to go live in late 2016 or early 2017 is well-timed to capitalize on this trend.
  • Up-front costs of utilizing AWS are much lower than those of on-premise IT platforms, and the platform is increasingly trusted relative to those more expensive alternatives. Brexit is having the immediate effect of slowing down sales cycles for managed hosting and for enterprise-wide hardware and software maintenance agreements. The research team found that uncertainty over just how significant an economic impact Brexit will have on European economies is making companies tighten capital expense (CAPEX) budgets and trim expensive maintenance agreements. UK enterprises are reverting to OPEX spending that is already budgeted.
  • CEOs are pushing CIOs to get out of high-cost hardware and on-premise software agreements so that operating costs become more predictable, and Brexit is accelerating the push. The continual pressure on CIOs to reduce high hardware and software maintenance costs is intensifying as a result. Because no one can quantify with precision just how Brexit will impact European economies, CEOs and senior management teams want to minimize downside risk now. Because of this, the cloud is becoming a more viable option, according to Deutsche Bank. One reseller said that public cloud computing platforms are a great answer to a recession, and their clients see Brexit as a catalyst to move more workloads to the cloud.
  • Brexit will impact AWS Enterprise Discount Program (EDP) revenues, forcing a greater focus on incentives for low-end and mid-tier services. The Deutsche Bank Markets Research team reports that AWS has this special program in place for its very largest customers. Under an EDP, AWS gives price discounts to large customers that commit to a full year (or more) and pay upfront, in many cases with minimum volume increases. One AWS partner told Deutsche Bank that they’re aware of one EDP payment of $25 million. In the event of a recession in Europe, it’s possible that such payments could be at risk. These market dynamics will drive AWS to promote more low- and mid-tier services to attract new business and balance out these larger deals.
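
To make the region and availability-zone distinction from the first bullet concrete, here is a minimal sketch (my example, not from the Deutsche Bank note) that lists AWS’s European regions and their zones with the boto3 SDK; it assumes boto3 is installed and AWS credentials are already configured:

```python
# A minimal sketch using boto3 (pip install boto3); assumes AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")   # Dublin, one of the EU regions cited above

# Each region is a physical geographic cluster of data centers...
for region in ec2.describe_regions()["Regions"]:
    name = region["RegionName"]
    if name.startswith("eu-"):                        # keep only the European regions
        regional = boto3.client("ec2", region_name=name)
        # ...and each region is split into isolated availability zones.
        zones = [z["ZoneName"] for z in regional.describe_availability_zones()["AvailabilityZones"]]
        print(name, zones)
```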

Original article here.


standard

The world’s largest SSD is now shipping for $10,000

2016-08-02 - By 

Even with today’s latest SSDs, sometimes you just need a little more space.

That’s where the Samsung PM1633a SSD comes in, clocking in at a massive 15.36 terabytes (or 15,360,000,000,000 bytes) of storage. Such capacity comes at a price, however, with preorders for the 15.36 TB behemoth coming in at around $10,000. At 15.36 TB, it is not only the largest SSD ever made, but the largest single drive of any kind, finally breaking the 10 TB barrier that spinning disk drives seem to be capped at.

The drive’s small size and huge capacity mean that you could outfit a standard 42U server storage rack with 1,008 PM1633a drives for a cool 15,482.88 TB (over 15 petabytes) of storage, to presumably store a spare copy of the known universe. (Assuming you can afford the $10,080,000 price tag for such a setup.)
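
For the curious, the rack arithmetic checks out using the article’s own numbers:

```python
# Quick check of the article's rack arithmetic.
drives_per_rack = 1008
capacity_tb = 15.36
price_usd = 10_000

print(f"total capacity: {drives_per_rack * capacity_tb:,.2f} TB")   # 15,482.88 TB
print(f"total cost:     ${drives_per_rack * price_usd:,}")          # $10,080,000
```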

But while the PM1633a is primarily designed for enterprise customers for use in data centers or large server systems, there’s theoretically nothing stopping you (aside from price, anyway) from just buying one for your home desktop. Good luck filling all that space!

Original article here.

