Posted On: September 2016 - AppFerret


IBM, Google, Facebook, Microsoft, Amazon form enormous AI partnership

2016-09-29 - By 

On Wednesday, the world learned of a new industry association called the Partnership on Artificial Intelligence, and it includes some of the biggest tech companies in the world. IBM, Google, Facebook, Microsoft, and Amazon have all signed on as marquee members, though the group hopes to expand even further over time. The goal is to create a body that can provide a platform for discussions among stakeholders and work out best practices for the artificial intelligence industry. Not directly mentioned, but easily seen on the horizon, is its place as the primary force lobbying for smarter legislation on AI and related future-tech issues.

Best practices can be boring or important, depending on the context, and in this case they are very, very important. Best practices could provide a framework for accurate safety testing, which will be important as researchers ask people to put more and more of their lives in the hands of AI and AI-driven robots. This sort of effort might also someday work toward a list of inherently dangerous and illegitimate actions or AI “thought” processes. One of its core goals is to produce thought leadership on the ethics of AI development.

So, this could end up being the bureaucracy that produces our earliest laws of robotics, if not the one that enforces them. The word “law” is usually used metaphorically in robotics. But with access to the lobbying power of companies like Google and Microsoft, we should expect the Partnership on AI to wade into discussions of real laws soon enough. For instance, the specifics of regulations governing self-driving car technology could still determine which would-be software standard will hit the market first. With the founding of this group, Google has put itself in a position to perhaps direct that regulation for its own benefit.

But, boy, is that ever not how they want you to see it. The group is putting in a really ostentatious level of effort to assure the world it’s not just a bunch of technology super-corps determining the future of mankind, like some sort of cyber-Bilderberg Group. The group’s website makes it clear that it will have “equal representation for corporate and non-corporate members on the board,” and that it “will share leadership with independent third-parties, including academics, user group advocates, and industry domain experts.”

Well, it’s one thing to say that, and quite another to live it. It remains to be seen if the group will actually comport itself as it will need to if it wants real support from the best minds in open source development. The Elon Musk-associated non-profit research company OpenAI responded to the announcement with a rather passive-aggressive word of encouragement.

The effort to include non-profits and other non-corporate bodies makes perfect sense. There aren’t many areas in software engineering where you can claim to be the definitive authority if you don’t have the public on-board. Microsoft, in particular, is painfully aware of how hard it is to push a proprietary standard without the support of the open-source community. Not only will its own research be stronger and more diverse for incorporating the “crowd,” any recommendations it makes will carry more weight with government and far more weight with the public.

That’s why it’s so notable that some major players are absent from this early roll call — most notably Apple and Intel. Apple has long been known to be secretive about its AI research, even to the point of hurting its own competitiveness, while Intel has a history of treating AI as an unwelcome distraction. Neither approach is going to win the day, though there is an argument to be made that by remaining outside the group, Apple can still selfishly consume any insights it releases to the public.

Leaving such questions of business ethics aside, robot ethics remains a pressing problem. Self-driving cars illustrate exactly why, and the classic thought experiment involves a crowded freeway tunnel, with no room to swerve or time to brake. Seeing a crash ahead, your car must decide whether to swerve left and crash you into a pillar, or swerve right and save itself while forcing the car beside you right off the road itself. What is moral, in this situation? Would your answer change if the other car was carrying a family of five?

Right now these questions are purely academic. The formation of groups like this shows they might not remain so for long.

Original article here.



Investing in AI offers more rewards than risks

2016-09-27 - By 

It’s difficult to predict how artificial intelligence technology will change over the next 10 to 20 years, but there are plenty of gains to be made. By 2018, robots will supervise more than 3 million human workers; by 2020, smart machines will be a top investment priority for more than 30 percent of CIOs.

Everything from journalism to customer service is already being replaced by AI that’s increasingly able to replicate the experience and ability of humans. What was once seen as the future of technology is already here, and the only question left is how it will be implemented in the mass market.

Over time, the insights gleaned from the industries currently taking advantage of AI — and improving the technology along the way — will make it ever more robust and useful within a growing range of applications. Organizations that can afford to invest heavily in AI are now creating the momentum for even more to follow suit; those that can’t will find their niches in AI at risk of being left behind.

Risk versus reward

While some may argue it’s impossible to predict whether the risks of AI applications to business are greater than the rewards (or vice versa), analysts predict that by 2020, 5 percent of all economic transactions will be handled by autonomous software agents.

The future of AI depends on companies willing to take the plunge and invest, no matter the challenge, to research the technology and fund its continued development. Some are even doing it by accident, like the company that paid a programmer more than half a million dollars over six years, only to learn he automated his own job.

Many of the AI advancements are coming from the military. The U.S. government alone has requested $4.6 billion in drone funding for next year, as automated drones are set to replace the current manned drones used in the field. AI drones simply need to be given a destination and they’ll be able to dodge air defenses and reach the destinations on their own, while any lethal decisions are still made by human eyes.

On the academic side, institutions like the Massachusetts Institute of Technology and the University of Oxford are hard at work mapping the human brain and attempting to emulate it. This provides two different pathways — creating an AI that replicates the complexities of the human brain and emulating an actual human brain, which comes with a slew of ethical questions and concerns. For example, what rights does an AI have? And what happens if the server storing your emulated loved one is shut down?

While these questions remain unanswered, eventually, the proven benefits of AI systems for all industries will spur major players from all sectors of the economy to engage with it. It should be obvious to anyone that, just as current information technology is now indispensable to practically every industry in existence, artificial intelligence will be, as well.

The future of computation

Until now, AI has mostly been about crafting preprogrammed tools for specific functions, and these have been markedly rigid. These kinds of AI-based computing strategies have become commonplace. The future of AI will depend on true learning. In other words, AI will no longer have to rely on being given direct commands to understand what it’s being told to do.

Currently, we use GPS systems that depend on automated perception and learning, mobile devices that can interpret speech, and search engines that are learning to interpret our intentions. Learning, specifically, is what makes developments like Google’s DeepMind and IBM’s Watson the next step in AI.

DeepMind wasn’t programmed with knowledge — there are no handcrafted programs or specific modules for given tasks. DeepMind is designed to learn automatically. The system is specifically crafted for generality so that the end result will be emergent properties. Emergent properties, such as the ability to beat grandmaster-level Go players, are incalculably more impressive when you realize no one programmed DeepMind to do it.
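
To make the distinction concrete, here is a minimal, hypothetical sketch of learning by trial and error rather than from hand-coded rules. The toy environment, rewards and parameters below are invented for illustration and have nothing to do with DeepMind’s actual systems; the point is only that the behaviour is discovered from experience, not written down by a programmer.

```python
import random

# Toy "corridor" environment: states 0..4, start at state 0, reward only for reaching state 4.
# Nothing below says "go right"; the agent has to discover that from experience.
N_STATES, ACTIONS = 5, (-1, +1)            # move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount factor, exploration rate

def pick_action(state):
    if random.random() < epsilon:          # explore occasionally
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        action = pick_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned policy: after training, the agent prefers to move right (+1) in every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

That policy was never coded; it emerged from trial, error and reward, which is the property the paragraph above describes at a vastly larger scale.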

Traditional AI is narrow and can only do what it is programmed to know, but Olli, an automated car powered by Watson, learns from monitoring and interacting with passengers. Each time a new passenger requests a recommendation or destination, it stores this information for use with the next person. New sensors are constantly added, and the vehicle (like a human driver) continuously becomes more intelligent as it does its job.

But will these AI systems be able to do what companies like Google really want them to do, like predict the buying habits of end users better than existing recommendation software? Or optimize supply chain transactions dynamically by relating patterns from the past? That’s where the real money is, and it’s a significantly more complex problem than playing games, driving and completing repetitive tasks.

The current proof points from various AI platforms — like finding fashion mistakes or predicting health problems — clearly indicate that AI is expanding, and these more complicated tasks will become a reality in the near term.

Soon, AI will be able to mimic complex human decision-making processes, such as giving investment advice or providing prescriptions to patients. In fact, with continuous improvement in true learning, first-tier support positions and more dangerous jobs (such as truck driving) will be completely taken over by robotics, leading to a new Industrial Revolution where humans will be freed up to solve problems instead of doing repetitious business processes.

The price of not investing in AI

The benefits and risks of investment are nebulous, uncertain and a matter for speculation. The one known risk common to all things new in business is uncertainty itself. So the risks mainly come in the form of making a bad investment, which is nothing new to the world of finance.

So as with all things strange and new, the prevailing wisdom is that the risk of being left behind is far greater, and far grimmer, than the benefits of playing it safe.

Original article here.

 



Internet Of Things – Smart Cities Are The Future (Infographic)

2016-09-25 - By 

Earlier this year, my city, Austin, TX, applied for the federal Smart Cities Grant. This was a new opportunity for one American city to focus on future technologies and their impact on everything from traffic to environmental conservation. Unfortunately, we didn’t get the $40 million grant (Columbus, OH, won), but at least smart cities are on the radar of the US government.

What is a smart city? How will smart technologies improve our urban life? Today’s graphic has those answers and more. First off, self-driving cars are no pie-in-the-sky achievement. Just recently, the US government published detailed requirements for autonomous vehicles. Uber has already started giving rides in self-driving cars in Pittsburgh, and the technology could be worldwide by 2020.

Transportation isn’t the only thing getting a facelift: water conservation, green energy, pedestrian safety and real-time infrastructure will change as well. This means governments will be able to change traffic lights for emergency vehicles, remotely brighten street lights where accidents or crime occur, and put up smart displays informing citizens on multiple aspects of city news. Are smart cities our path to a better future? It certainly looks that way, because the smart cities of tomorrow are starting to emerge now.

 

Original article here.



40 great websites to learn something new every day

2016-09-23 - By 

A few decades ago, when you wanted to learn something new it typically meant spending a couple of evenings a week at a local school, taking a photography or bookkeeping class from a bored night school instructor.

Today, the worlds of learning and personal or professional development are literally at your fingertips.

To help you get started, here are 40 amazing places to learn something new:

1. Lynda: Where over 4 million people have already taken courses.

2. Your favorite publications: Make time to read and learn something new every day from your favorite blogs and online magazines.

3. CreativeLive: Get smarter and boost your creativity with free online classes.

4. Hackaday: Learn new skills and facts with bite-sized hacks delivered daily.

5. MindTools: A place to learn leadership skills (see more great places to learn leadership skills online here).

6. Codecademy: Learn Java, PHP, Python, and more from this reputable online coding school.

7. EdX: Find tons of MOOCs, including programming courses.

8. Platzi: Get smarter in marketing, coding, app development, and design.

9. Big Think: Read articles and watch videos featuring expert “Big Thinkers.”

10. Craftsy: Learn a fun, new skill from expert instructors in cooking, knitting, sewing, cake decorating, and more.

11. Guides.co: A massive collection of online guides on just about every topic imaginable.

12. LitLovers: Practice your love of literature with free online lit courses.

13. Lifehacker: One of my personal favorites!

14. Udacity: Learn coding at the free online university developed by Sebastian Thrun.

15. Zidbits: Subscribe to this huge collection of fun facts, weird news, and articles on a variety of topics.

16. TED Ed: The iconic TED brand brings you lessons worth sharing.

17. Scitable: Teach yourself about genetics and the study of evolution.

18. iTunes U: Yale, Harvard, and other top universities share lecture podcasts.

19. Livemocha: Connect with other learners in over 190 countries to practice a new language.

20. MIT OpenCourseWare: Learn introductory coding skills; plus, check out these other places to learn coding for free.

21. WonderHowTo: New videos daily to teach you how to do any number of different things.

22. FutureLearn: Join over 3 million others taking courses in everything from health and history to nature and more.

23. One Month: Commit to learning a new skill over a period of one month with daily work.

24. Khan Academy: One of the biggest and best-known gamified online learning platforms.

25. Yousician: Who said when you learn something new it has to be work-related?

26. Duolingo: A completely free, gamified language learning site (find more language learning sites here).

27. Squareknot: Get creative with other creatives.

28. Highbrow: A subscription service that delivers five-minute courses to your email daily.

29. Spreeder: How cool would it be to be able to speed read?

30. Memrise: Get smarter and expand your vocabulary.

31. HTML5 Rocks: Google pro contributors bring you the latest updates, resource guides, and slide decks for all things HTML5.

32. Wikipedia’s Daily Article List: Get Wikipedia’s daily featured article delivered right to your inbox.

33. DataMonkey: The ability to work with data is indispensable. Learn SQL and Excel.

34. Saylor Academy: Offers a great public speaking course you can take online, and see more free public speaking courses here.

35. Cook Smarts: Learn basic to advanced food prep and cooking techniques.

36. The Happiness Project: Why not just learn how to be happy? I’d give five minutes a day to that!

37. Learni.st: Expertly curated courses with the option of premium content.

38. Surface Languages: A good choice if you just need to learn a few phrases for travel.

39. Academic Earth: Offering top quality university-level courses since 2009.

40. Make: Learn how to do that DIY project you’ve had your eye on.

There’s no reason you can’t learn something new every day, whether it’s a work skill, a fun new hobby, or even a language!

Original article here.

 



Big Data and Cloud – Are You Ready to Embrace Both?

2016-09-23 - By 

This week’s Economist magazine has a cover story about Uber, the world’s most valuable startup and a symbol of disruptive innovation. The race to reinvent transportation worldwide is moving so fast that it will dramatically change the way we travel in the next 5-10 years. While studying the success story of Uber, I was most interested in the factors that led to the company’s exceptional growth: spreading to 425 cities around the globe in 7 years, with a valuation of $70 billion.

There are surely multiple factors that contributed to its success, but what surprised me was how heavily it capitalizes on data analytics. In 2014, Uber launched UberPool, which uses algorithms to match riders based on location and sets the price based on the likelihood of picking up another passenger. It analyzes consumers’ transaction histories and spending patterns and provides intelligent recommendations for personalized services.
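
As a rough, hypothetical illustration of that pricing idea (not Uber’s actual algorithm), one can estimate the chance of a second pickup near a route from historical pickup locations and discount the fare accordingly. Every coordinate, fare and threshold below is invented.

```python
from math import hypot

# Hypothetical historical pickup points (x, y) on a simplified city grid.
past_pickups = [(1.0, 1.2), (1.1, 0.9), (5.0, 5.0), (1.3, 1.1), (4.8, 5.2)]

def match_probability(route_point, radius=0.5):
    """Crude estimate: share of historical pickups within `radius` of the route."""
    near = sum(1 for x, y in past_pickups
               if hypot(x - route_point[0], y - route_point[1]) <= radius)
    return near / len(past_pickups)

def pooled_fare(base_fare, route_point, max_discount=0.4):
    """Discount the fare in proportion to the estimated chance of a second pickup."""
    return round(base_fare * (1 - max_discount * match_probability(route_point)), 2)

print(pooled_fare(10.00, (1.0, 1.0)))   # busy area: bigger discount
print(pooled_fare(10.00, (9.0, 9.0)))   # quiet area: close to the base fare
```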

Uber is just one example; thousands of enterprises have already embraced big data and predictive analytics for HR management, hiring, financial management, and employee relations management. The latest startups are already leveraging analytics to bring data-driven, practical recommendations to the market. However, this does not mean the situation is ideal.

According to MIT Technology Review, roughly 0.5 percent of digital data is analyzed, which means companies are losing millions of opportunities to make smart decisions, improve efficiency, attract new prospects and achieve business goals. The reason is simple: they are not leveraging the potential offered by data analytics.

Though the percentage of data being analyzed is disappointing, research points to a growing realization among businesses that they need to adopt analytics. By 2020, around 1.7 megabytes of new information will be created every single second for every human being on the planet.

Another thing deeply associated with this growing data asset is the cloud. As the statistics show, data creation is on the rise, and it will lead to storage and security issues for businesses. Though there are several free cloud services available, the adoption rate is still disappointing.

When we explore why big data analysis is lagging behind and how to fix the problem, it’s vital to assess the storage services too. Though some organizations have been using cloud storage for years, overall adoption is slow. It’s usually a good option to host general data in the cloud while keeping sensitive information on premises.

Big Data and Cloud for Business:

As we noted in the previous post, private cloud adoption increased from 63% to 77%, which has driven hybrid cloud adoption up from 58% to 71% year-over-year. There are enough reasons and stats to explain the need for cloud storage and big data analytics for small businesses. Here are four fundamental reasons why companies need reliable cloud technology to carry out their big data analytics exercises.

1. Cost:

Looking at the available options at this point, there are two concerns: some are too costly and time-consuming, while others are simply unreliable and insecure. Without a clear solution, the default has been to do the bare minimum with the available data. If we can successfully integrate data into the cloud, the combined cost of both services (storage and analytics) will flatten out and benefit the business.

2. Security:

We have already discussed that companies have gigantic amounts of data but no clue what to do with it. The first thing they need is to keep that data in a secure environment where no breach can occur. Look at the recent revelations about the Dropbox hack, which affected over 65 million accounts associated with the service. Since moving significant amounts of data in and out of the cloud comes with security risks, you have to ensure that the cloud service you are using is reliable.

There are concerns and risks, but thanks to big players like IBM, Microsoft, and Google, trust in cloud services is increasing day by day and adoption is on the rise.

3. Integration:

If you look at the different sales, marketing, and social media management tools, they all offer integration with other apps. For example, you can integrate Facebook with MailChimp, or Salesforce with MailChimp, which means your marketing or sales cloud offers a two-in-one service: it not only processes your data and provides analytics but also ensures that the findings and data remain in a secure environment.

4. Automation:

Once you remove uncertainty and find a reliable, cost-effective solution for the business, the next consideration is the feature set. Some cloud services offer broader automation features, enabling users to save time and spend it on more important work. Data management, campaign management, data downloads, real-time analytics, automatic alerts, and drip management are some of the key automation features that any data analytics architect will be looking for.

While integrating the cloud with data analytics, make sure that it serves your purpose while keeping costs under control; otherwise, the entire objective of the exercise will be lost. As big data becomes an integral part of every business, data management applications will become more user-friendly and more affordable. It is a challenge, but there are plenty of opportunities for small businesses to take big data into account and achieve significant results.

Original article here.



All the National Food Days

2016-09-22 - By 

Once I paid attention, it started to feel like there were a whole lot of national food and drink days in the United States. National Chili Dog Day. National Donut Day. National Beer Day. I’m totally for this, as I will accept any excuse to consume any of these items. But still, there seems to be a lot.

According to this list, 214 days of the year are a food or drink day. Every single day of July was one. How is one to keep track? I gotta plan, you know?

So here are all the days in calendar form. Today, August 18, is National Pinot Noir Day. Tomorrow is National Potato Day.

Click Image or here to see Interactive Graphic.

Hold on to your britches next week. Every single day has you covered, and they’re all desserts.

 

Nerd Notes

  • Some calendar days have more than one claiming food or drink item, which makes for a list of 302 items. I only showed the first, out of convenience.
  • Some food items can cover multiple categories. In these cases, I just went with my gut.
  • I made this with d3.js and a little bit of R. This calendar example by Kathy Zhou got me most of the way there.

Original article here.



Same Raw Poll Data – Different Results?

2016-09-22 - By 

You’ve heard of the “margin of error” in polling. Just about every article on a new poll dutifully notes that the margin of error due to sampling is plus or minus three or four percentage points.

But in truth, the “margin of sampling error” – basically, the chance that polling different people would have produced a different result – doesn’t even come close to capturing the potential for error in surveys.

Polling results rely as much on the judgments of pollsters as on the science of survey methodology. Two good pollsters, both looking at the same underlying data, could come up with two very different results.

How so? Because pollsters make a series of decisions when designing their survey, from determining likely voters to adjusting their respondents to match the demographics of the electorate. These decisions are hard. They usually take place behind the scenes, and they can make a huge difference.

To illustrate this, we decided to conduct a little experiment. On Monday, in partnership with Siena College, the Upshot published a poll of 867 likely Florida voters. Our poll showed Hillary Clinton leading Donald J. Trump by one percentage point.

We decided to share our raw data with four well-respected pollsters and asked them to estimate the result of the poll themselves.

Here’s who joined our experiment:

Charles Franklin, of the Marquette Law School Poll, a highly regarded public poll in Wisconsin.

Patrick Ruffini, of Echelon Insights, a Republican data and polling firm.

Margie Omero, Robert Green and Adam Rosenblatt, of Penn Schoen Berland Research, a Democratic polling and research firm that conducted surveys for Mrs. Clinton in 2008.

Sam Corbett-Davies, Andrew Gelman and David Rothschild, of Stanford University, Columbia University and Microsoft Research. They’re at the forefront of using statistical modeling in survey research.

Here’s what they found:

| Pollster | Clinton | Trump | Margin |
| --- | --- | --- | --- |
| Charles Franklin (Marquette Law) | 42% | 39% | Clinton +1 |
| Patrick Ruffini (Echelon Insights) | 39% | 38% | Clinton +1 |
| Omero, Green, Rosenblatt (Penn Schoen Berland Research) | 42% | 38% | Clinton +4 |
| Corbett-Davies, Gelman, Rothschild (Stanford University/Columbia University/Microsoft Research) | 40% | 41% | Trump +1 |
| NYT Upshot/Siena College | 41% | 40% | Clinton +1 |

Well, well, well. Look at that. A net five-point difference between the five measures, including our own, even though all are based on identical data. Remember: There are no sampling differences in this exercise. Everyone is coming up with a number based on the same interviews.

Their answers shouldn’t be interpreted as an indication of what they would have found if they had conducted their own survey. They all would have designed the survey at least a little differently – some almost entirely differently.

But their answers illustrate just a few of the different ways that pollsters can handle the same data – and how those choices can affect the result.

So what’s going on? The pollsters made different decisions in adjusting the sample and identifying likely voters. The result was four different electorates, and four different results.

| Pollster | Result | White | Hisp. | Black | Sample |
| --- | --- | --- | --- | --- | --- |
| Charles Franklin (Marquette Law) | Clinton +3 | 68% | 15% | 10% | +5 Dem. |
| Patrick Ruffini (Echelon Insights) | Clinton +1 | 67% | 14% | 12% | +1 Dem. |
| Omero, Green, Rosenblatt (Penn Schoen Berland Research) | Clinton +4 | 65% | 15% | 12% | +4 Dem. |
| Corbett-Davies, Gelman, Rothschild (Stanford University/Columbia University/Microsoft Research) | Trump +1 | 70% | 13% | 14% | +1 Rep. |
| NYT Upshot/Siena College | Clinton +1 | 69% | 14% | 12% | +1 Rep. |

There are two basic kinds of choices that our participants are making: one about adjusting the sample and one about identifying likely voters.

How to make the sample representative?

Pollsters usually make statistical adjustments to make sure that their sample represents the population – in this case, voters in Florida. They usually do so by giving more weight to respondents from underrepresented groups. But this is not so simple.

What source? Most public pollsters try to reach every type of adult at random and adjust their survey samples to match the demographic composition of adults in the census. Most campaign pollsters take surveys from lists of registered voters and adjust their sample to match information from the voter file.

Which variables? What types of characteristics should the pollster weight by? Race, sex and age are very standard. But what about region, party registration, education or past turnout?

How? There are subtly different ways to weight a survey. One of our participants doesn’t actually weight the survey in a traditional sense, but builds a statistical model to make inferences about all registered voters (the same technique that yields our pretty dot maps).
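
To make the traditional version of that adjustment concrete, here is a minimal sketch with invented numbers and a single weighting variable: each respondent is weighted by their group’s assumed population share divided by its sample share, so under-represented groups count for more. Real pollsters weight on several variables at once, often by iterative raking; this is only the simplest possible cell-weighting step.

```python
from collections import Counter

# Hypothetical raw sample of 100 respondents: (group, stated vote choice).
sample = (
    [("white", "Trump")] * 40 + [("white", "Clinton")] * 30
    + [("black", "Clinton")] * 8 + [("black", "Trump")] * 2
    + [("hispanic", "Clinton")] * 12 + [("hispanic", "Trump")] * 8
)

# Assumed population shares the pollster wants the sample to match (invented).
population_share = {"white": 0.66, "black": 0.14, "hispanic": 0.20}

n = len(sample)
sample_share = {group: count / n
                for group, count in Counter(group for group, _ in sample).items()}

# Post-stratification weight for each group: population share / sample share.
weight = {group: population_share[group] / sample_share[group] for group in population_share}

weighted_votes = Counter()
for group, vote in sample:
    weighted_votes[vote] += weight[group]

total = sum(weighted_votes.values())
print({vote: f"{100 * w / total:.1f}%" for vote, w in weighted_votes.items()})
# Unweighted, this toy sample splits 50/50; after weighting it is roughly 51.5% to 48.5%.
```

Even this toy adjustment moves the margin by about three points, which is exactly the kind of movement visible in the tables above.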

Who is a likely voter?

There are two basic ways that our participants selected likely voters:

Self-reported vote intention Public pollsters often use the self-reported vote intention of respondents to choose who is likely to vote and who is not.

Vote history Partisan pollsters often use voter file data on the past vote history of registered voters to decide who is likely to cast a ballot, since past turnout is a strong predictor of future turnout.

Our participants’ choices

The participants split across all these choices.

| Pollster | Who is a likely voter? | Type of weight | Tries to match |
| --- | --- | --- | --- |
| Charles Franklin (Marquette Law) | Self-report | Traditional | Census |
| Patrick Ruffini (Echelon Insights) | Vote history | Traditional | Voter file |
| Omero, Green, Rosenblatt (Penn Schoen Berland Research) | Self-report | Traditional | Voter file |
| Corbett-Davies, Gelman, Rothschild (Stanford University/Columbia University/Microsoft Research) | Vote history | Model | Voter file |
| NYT Upshot/Siena College | Report + history | Traditional | Voter file |

Their varying decisions on these questions add up to big differences in the result. In general, the participants who used vote history in the likely-voter model showed a better result for Mr. Trump.

At the end of this article, we’ve posted detailed methodological choices of each of our pollsters. Before that, a few of my own observations from this exercise:

• These are all good pollsters, who made sensible and defensible decisions. I have seen polls that make truly outlandish decisions with the potential to produce even greater variance than this.

• Clearly, the reported margin of error due to sampling, even when including a design effect (which purports to capture the added uncertainty of weighting), doesn’t even come close to capturing total survey error. That’s why we didn’t report a margin of error in our original article.

• You can see why “herding,” the phenomenon in which pollsters make decisions that bring them close to expectations, can be such a problem. There really is a lot of flexibility for pollsters to make choices that generate a fundamentally different result. And I get it: If our result had come back as “Clinton +10,” I would have dreaded having to publish it.

• You can see why we say it’s best to average polls, and to stop fretting so much about single polls.

Finally, a word of thanks to the four pollsters for joining us in this exercise. Election season is as busy for pollsters as it is for political journalists. We’re grateful for their time.

Below, the methodological choices of the other pollsters.

Charles Franklin Clinton +3

Marquette Law

Mr. Franklin approximated the approach of a traditional pollster and did not use any of the information on the voter registration file. He weighted the sample to an estimate of the demographic composition of Florida’s registered voters in 2016, based on census data, by age, sex, education and race. Mr. Franklin’s likely voters were those who said they were “almost certain” to vote.

Patrick Ruffini Clinton +1

Echelon Insights

Mr. Ruffini weighted the sample by voter file data on age, race, gender and party registration. He next added turnout scores: an estimate for how likely each voter is to turn out, based exclusively on their voting history. He then weighted the sample to the likely turnout profile of both registered and likely voters – basically making sure that there were the right number of likely and unlikely voters in the voter file. This is probably the approach most similar to the Upshot/Siena methodology, so it is not surprising that it also is the closest result.

Sam Corbett-Davies, Andrew Gelman and David Rothschild Trump +1

Stanford University/Columbia University/Microsoft Research

Long story short: They built a model that tries to figure out what characteristics predict support for Mrs. Clinton and Mr. Trump based on many of the same variables used for weighting. They then predicted how every person in the state would vote, based on that model. It’s the same approach we used to make the pretty dot maps of Florida. The likely electorate was determined exclusively by vote history, not self-reported vote intention. They included 2012 voters — which is why their electorate has more black voters than the others — and then included newly registered voters according to a model of voting history based on registration.

Margie Omero, Robert Green, Adam Rosenblatt Clinton +4

Penn Schoen Berland Research

The sample was weighted to state voter file data for party registration, gender, race and ethnicity. They then excluded the people who said they were unlikely to vote. These self-reported unlikely voters were 7 percent of the sample, so this is the most permissive likely voter screen of the groups. In part as a result, it’s also Mrs. Clinton’s best performance. In an email, Ms. Omero noted that every scenario they examined showed an advantage for Clinton.

Original article here.



5 ways artificial intelligence will change enterprise IT

2016-09-12 - By 

It’s been a busy summer in the artificial intelligence (A.I.) space, but the most interesting A.I. opportunities may not come from the biggest names.

You may have heard about Tesla’s self-driving cars that made headlines twice, for vastly different reasons — a fatal crash in Florida in which the driver was using the Autopilot software, and claims by a Missouri man that the feature drove him 20 miles to a hospital after he suffered a heart attack, saving his life.

Or you might have heard of Apple spending $200 million to acquire machine learning and A.I. startup Turi. A smart drone defeated an experienced Air Force pilot in flight simulation tests. IBM’s Watson diagnosed a 60-year-old woman’s rare form of leukemia within 10 minutes, after doctors had been stumped for months.

 

But believe it or not, enterprise IT is also a fertile ground for A.I. In fact, some of the most immediate and profound use cases for A.I. will come as companies increasingly integrate it into their data centers and development organizations to automate processes that have been done manually for decades.

Here are five examples:

1. Predicting software failures

Research at Harvard University’s T.H. Chan School of Public Health shows it may one day be possible to use A.I. algorithms to evaluate a person’s risk of cardiovascular disease. The study is testing whether these algorithms can draw connections between how patients describe their symptoms and the likelihood that they have the disease. The algorithms could lead to the development of a lexicon to interpret symptoms better and make more accurate diagnoses faster.

In a similar way, A.I. algorithms will be able to review and understand log files throughout the IT infrastructure in ways that are impossible with traditional methods, and they could predict a system crash minutes or hours before a human would notice anything was wrong.
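
As a sketch of the general idea (no particular vendor’s product), an off-the-shelf anomaly detector can be trained on features extracted from “healthy” logs and then flag time windows that drift away from that baseline. The features and numbers below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-minute features extracted from logs: [error_count, avg_latency_ms].
rng = np.random.default_rng(0)
normal_windows = np.column_stack([rng.poisson(2, 500), rng.normal(120, 15, 500)])

# Minutes leading up to a (hypothetical) crash: errors and latency creeping upward.
degrading_windows = np.array([[9, 210], [14, 260], [22, 340]])

# Learn what "normal" log behaviour looks like, then score the suspicious windows.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_windows)
print(detector.predict(degrading_windows))   # -1 marks an anomaly; expect all three flagged
```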

2. Detecting cybersecurity issues

The list of high-profile cyberattacks over the last couple of years keeps growing and includes the theft of the confidential data of tens of millions of Target customers during the height of the 2013 holiday shopping season, the server breach at the U.S. Office of Personnel Management that compromised the sensitive personal information of about 21.5 million people, and the recent infiltration of the computer network of the Democratic National Committee by Russian government hackers.

Artificial intelligence holds great promise with its ability to learn the patterns of networks, devices, and systems and detect deviations that could reveal in-progress attacks. A crop of startups is focused on these approaches, including StackPath — founded by entrepreneur Lance Crosby, who sold his previous company, cloud infrastructure startup SoftLayer, to IBM in 2013 for $2 billion.

The Defense Advanced Research Projects Agency, or DARPA (the agency that helped create the internet), recently sponsored a contest in which seven fully autonomous A.I. bots found security vulnerabilities hiding in vast amounts of code. The next day, the winning bot was invited to compete against the best human hackers in the world. It beat some of the human teams at various points in the competition.

A.I. just might hold the key to finally beating the hackers.

3. Creating super-programmers

The fictional superhero Tony Stark in the “Iron Man” movies relies on a powered suit of armor to protect the world. A.I. could offer similar capabilities to just-out-of-college software developers.

We all know Siri. Under the hood, she’s an A.I. neural network trained on vast amounts of human language. When we ask her directions to McDonald’s (not that I would admit that I do that sort of thing), she “understands” the intention behind the English words.

Imagine a neural network trained to understand all the source code stored in GitHub. That’s tens of millions of lines of code. Or what about the entire history of projects as robust as the Linux operating system? What if Siri could “understand” the intention behind any piece of code?

Just as Tony Stark depends on technology to get his job done, ordinary programmers will be able to turn to A.I. to help them do their jobs far better than they could on their own.

4. Making sense of the Internet of Things

Recent research forecasts that A.I. and machine learning in Big Data and IoT will reach $18.5 billion by 2021. It’s no wonder. The idea of devices, buildings, and everyday objects being interconnected to make them smarter and more responsive brings with it unprecedented complexity.

It’s a data problem. As IoT progresses, the amount of unstructured machine data will far exceed our ability to make sense of it with traditional methods.

Organizations will have to turn to A.I. for help in culling these billions of data points for actionable insights.

5. Robots in data centers

Ever seen this cool video of robots working in an Amazon distribution center? The same is coming to large corporate data centers. Yes, physical robots will handle maintenance tasks such as swapping out server racks.

According to a news report, companies such as IBM and EMC are already using iRobot Create, a customizable version of Roomba, to deploy robots that zoom around data centers and keep track of environmental factors such as temperature, humidity, and airflow.

Self-driving cars are far from the only advances pushing A.I. boundaries. The innovations in enterprise IT may be happening behind the scenes, but they’re no less dramatic.



8 digital skills we must teach our children

2016-09-11 - By 

The social and economic impact of technology is widespread and accelerating. The speed and volume of information have increased exponentially. Experts are predicting that 90% of the entire population will be connected to the internet within 10 years. With the internet of things, the digital and physical worlds will soon be merged. These changes herald exciting possibilities. But they also create uncertainty. And our kids are at the centre of this dynamic change.

Children are using digital technologies and media at increasingly younger ages and for longer periods of time. They spend an average of seven hours a day in front of screens – from televisions and computers, to mobile phones and various digital devices. This is more than the time children spend with their parents or in school. As such, it can have a significant impact on their health and well-being. What digital content they consume, who they meet online and how much time they spend onscreen – all these factors will greatly influence children’s overall development.

The digital world is a vast expanse of learning and entertainment. But it is in this digital world that kids are also exposed to many risks, such as cyberbullying, technology addiction, obscene and violent content, radicalization, scams and data theft. The problem lies in the fast and ever evolving nature of the digital world, where proper internet governance and policies for child protection are slow to catch up, rendering them ineffective.

Moreover, there is the digital age gap. The way children use technology is very different from adults. This gap makes it difficult for parents and educators to fully understand the risks and threats that children could face online. As a result, adults may feel unable to advise children on the safe and responsible use of digital technologies. Likewise, this gap gives rise to different perspectives of what is considered acceptable behaviour.

So how can we, as parents, educators and leaders, prepare our children for the digital age? Without a doubt, it is critical for us to equip them with digital intelligence.

Digital intelligence or “DQ” is the set of social, emotional and cognitive abilities that enable individuals to face the challenges and adapt to the demands of digital life. These abilities can broadly be broken down into eight interconnected areas:

Digital identity: The ability to create and manage one’s online identity and reputation. This includes an awareness of one’s online persona and management of the short-term and long-term impact of one’s online presence.

Digital use: The ability to use digital devices and media, including the mastery of control in order to achieve a healthy balance between life online and offline.

Digital safety: The ability to manage risks online (e.g. cyberbullying, grooming, radicalization) as well as problematic content (e.g. violence and obscenity), and to avoid and limit these risks.

Digital security: The ability to detect cyber threats (e.g. hacking, scams, malware), to understand best practices and to use suitable security tools for data protection.

Digital emotional intelligence: The ability to be empathetic and build good relationships with others online.

Digital communication: The ability to communicate and collaborate with others using digital technologies and media.

Digital literacy: The ability to find, evaluate, utilize, share and create content as well as competency in computational thinking.

Digital rights: The ability to understand and uphold personal and legal rights, including the rights to privacy, intellectual property, freedom of speech and protection from hate speech.

Above all, the acquisition of these abilities should be rooted in desirable human values such as respect, empathy and prudence. These values facilitate the wise and responsible use of technology – an attribute that will mark the leaders of tomorrow. Indeed, cultivating digital intelligence grounded in human values is essential for our kids to become masters of technology instead of being mastered by it.

Original article here.



PUBLIC CLOUD MARKET TO EXCEED $236B BY 2020

2016-09-09 - By 

The biggest disruptive force in the global tech market over the past two decades is about to get a lot bigger.

In a new report, researchers at Forrester predict the public cloud services market will grow to $236 billion by 2020, more than double the $114 billion public cloud spend worldwide this year.

This dramatic uptick—at an annual growth rate of 23 percent—reflects the massive IT modernization effort among private sector companies and, to a lesser extent, government.


According to Forrester, North American and European companies have migrated and are already running approximately 18 percent of their custom-built application software on public cloud platforms, and businesses are increasingly inclined to rent processing and storage from vendors rather than stand up infrastructure themselves.

Forrester also reports many companies are “challenging the notion that public clouds are not suited for core business applications,” opting to move mission-critical workloads to the public cloud for increased agility and efficiency despite perceived myths that public cloud platforms aren’t as secure as internal data centers might be.

Forrester compares the public cloud of today to adolescent children on the fast track to adulthood, suggesting it will be “the dominant technology model in a little over three years.”

“Today’s public cloud services are like teenagers—exuberant, sometimes awkward, and growing rapidly,” the report states. “By 2020, public cloud services will be like adults, with serious [enterprise] responsibilities and slower growth.”

The report has many implications for government. The Obama administration’s fiscal 2017 budget calls for $7.3 billion in spending on provisioned services like cloud computing, and true federal cloud spending is on the rise across civilian, military and intel agencies. An analysis from big data and analytics firm Govini states the federal government spent $3.3 billion on cloud in fiscal 2015 on the backs of infrastructure-as-a-service offerings.

Federal agencies have inched forward in cloud, first with email-as-a-service offerings and later with a growing number of infrastructure-, platform- and software-as-a-service offerings, but they’ve been slowed in part by lagging legacy technologies that tend to make up their enterprise.

The government’s aging systems—some of which date back to the 1970s—are in dire need of modernization, and Congress is currently reviewing legislation that could greatly speed up the effort. One initiative would create a $3.1 billion IT Modernization Fund that agencies could borrow against, and another would direct agencies to establish working capital funds for IT.

If either piece of legislation is enacted—or some combination of both—the government’s spend on cloud computing is likely to increase and fall more in line with what industry is doing, using cloud computing as the base for IT enterprises.

Original article here.



What’s The Return On Developing In The Cloud?

2016-09-09 - By 

Cloud integration has come and gone; the cloud now fully envelops modern business. However, gauging a return on investment (ROI) for the cloud is difficult and oftentimes too subjective, leaving many businesses in the dark about whether they spent their time, money, and energies wisely … or whether they should even consider using the cloud to run their applications. Luckily, we can shine a light on cloud ROI once and for all and provide some guidance based on what we’ve seen with our customers.

Understand the impact of the cloud on your corporation

According to a report by RightScale, 93% of businesses use the cloud in some form. It’s safe to say cloud computing is a mainstay. But after the initial struggle of cloud adoption, many businesses’ tangible ROIs fell short of expectations. From IT teams that lacked knowledge about the right Web application program interfaces to using cloud technology for more than critical functions, companies stretched the cloud to its limits – and then found they couldn’t gauge accurate returns.

Simple cloud ROI calculators, whether in hardware or software form, will provide inaccurate results. Take the time to measure utilization among your servers, look at network consumption, and adjust your efforts accordingly. Elasticity in the cloud allows you to resize cloud instances to match your disk usage measurements.
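
As a trivial sketch of that kind of measurement-driven right-sizing (utilization figures and thresholds invented for illustration):

```python
# Hypothetical hourly CPU utilization (%) for one cloud instance over a day.
cpu_samples = [12, 15, 9, 11, 14, 10, 13, 55, 60, 58, 18, 12,
               11, 10, 14, 13, 12, 11, 9, 10, 12, 13, 11, 10]

avg = sum(cpu_samples) / len(cpu_samples)
peak = max(cpu_samples)

# Invented rule of thumb: downsize if the instance mostly idles, upsize if it is saturated.
if avg < 20 and peak < 70:
    print(f"avg {avg:.0f}%, peak {peak}%: consider a smaller instance size")
elif avg > 70 or peak > 90:
    print(f"avg {avg:.0f}%, peak {peak}%: consider a larger instance size")
else:
    print(f"avg {avg:.0f}%, peak {peak}%: current size looks about right")
```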

Platform as a Service (PaaS) is seen in the industry as a cloud computing service model that helps organizations create and test apps without changing existing architectural landscapes. Many businesses just beginning to understand the value PaaS delivers are trying to calculate how PaaS helps business growth. But there’s no set formula to calculate cloud ROIs; each corporation must analyze the benefits and determine the value for itself based on its business goals – whether for today or for three years from now. However, this doesn’t mean there aren’t tools out there today to make specific calculations possible.

Measure your cloud ROI accurately

To measure cloud ROI with any degree of accuracy, you must look at the change in your technical infrastructure from different standpoints – both tangible and non-tangible. Consider the financial returns that are clearly tangible: better use of resources with fewer FTEs for manual tasks, greater scalability, etc. Non-tangible return factors would consist of speed, reliability, user friendliness, and risk management. Computing cloud ROI requires a holistic view of your infrastructure and an assessment of how it transforms your enterprise as a whole.

Look at your corporation’s cloud computing goals and benefits differently from other technology adoptions. Factor in the value of the cloud’s competitive advantage, which most would agree is agility. Agility is a benefit that is relative to each business and is a reason why companies opt to use a PaaS solution. There are others. IT managers need to step back from cloud computing systems to analyze and assign value to several individual points. These include:

  • Speed of development and increased productivity: A main draw for integrating the cloud is to streamline and enhance employee productivity, so it’s important to assess boosts in organizational speed and agility. 
  • Streamlined costs and increased profits: Measuring expenditures is the only way to find your ROI. Look at the actual costs of cloud migration in dollar amounts, as well as the opportunity costs, then see where your company can trim the fat. 
  • Improvements in customer service: The cloud gives businesses the ability to respond better and faster to customer problems, making it easy to expand customer reach and build retention. 
  • Additional opportunities for innovation and growth: The cloud provides scalability to meet corporate needs, as well as the increased ability to create and test new ideas and solutions. 

Each of these benefits comes with an ROI measure. Assign an estimated dollar amount to each benefit, then consider other benefits that may result from cloud integration, such as headcount reductions and improvements in market intelligence. Think architecturally when calculating cloud ROI, and don’t forget that cloud benefits can multiply and compound – improving one area of business often enhances other areas.
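
As a back-of-the-envelope sketch of that exercise, with every figure invented for illustration, the standard formula ROI = (total benefit - total cost) / total cost can then be applied to the estimates:

```python
# Hypothetical annual benefit estimates for the categories above, in dollars.
benefits = {
    "productivity gains": 120_000,
    "infrastructure cost savings": 80_000,
    "customer retention uplift": 45_000,
    "new product revenue": 60_000,
}

# Hypothetical annual costs of the cloud initiative.
costs = {
    "subscription and usage fees": 90_000,
    "migration and training": 60_000,
}

total_benefit = sum(benefits.values())
total_cost = sum(costs.values())
roi = (total_benefit - total_cost) / total_cost

print(f"Total benefit: ${total_benefit:,}")
print(f"Total cost:    ${total_cost:,}")
print(f"ROI: {roi:.0%}")   # (305,000 - 150,000) / 150,000 = about 103%
```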

Original article here.



104 Photo Editing Tools You Should Know About

2016-09-08 - By 

Hello, photographers. For the last two months, I’ve been doing market research for my project Photolemur and looking for different tools in the area of photo enhancement and photo editing. I spent a lot of time searching, and came up with a large organized list of 104 photo editing tools and apps that you should know about.

I believe all these services might be useful for some photographers, so I’ll share them here with you. And just to make it easier to find something specific, the list is numbered. Enjoy!

Table of contents

  • Photo enhancers (1-3)
  • Online editors (4-21)
  • Free desktop editors (22-26)
  • Paid desktop editors (27-40)
  • HDR photo editors (41-53)
  • Cross-platform image editors (54-57)
  • Photo filters (58-66)
  • Photo editing mobile apps (67-85)
  • RAW processors (86-96)
  • Photo viewers and managers (97-99)
  • Other (100-104)

Photo enhancers

1. Photolemur – The world’s first fully automated photo enhancement solution. It is powered by a special AI algorithm that fixes imperfections on images without human involvement (beta).

2. Softcolorsoftware – Automatic photo editor for batch photo enhancing, editing and color management.

3. Perfectly Clear – Photo editor with a set of automatic correction presets for Windows & Mac ($149)

Online editors

4. Pixlr – High-end photo editing and quick filtering – in your browser (free)

5. Fotor – Overall photo enhancement in an easy-to-use package (free)

6. Sumopaint – The most versatile photo editor and painting application that works in a browser (free)

7. Irfanview – An image viewer with added batch editing and conversion. Rename a huge number of files in seconds, as well as resize them. Freeware (for non-commercial use)

8. Lunapic – Just a simple free online editor

9. Photos – Photo viewing and editing app for OS X that comes free with the Yosemite operating system (free)

10. FastStone – Fast, stable, user-friendly image browser, converter and editor. Provided as freeware for personal and educational use.

11. Pics.io – Very simple online photo editor (free)

12. Ribbet – Ribbet lets you edit all your photos online, and love every second of it (free)

13. PicMonkey – One of the most popular free online picture editors

14. Befunky – Online photo editor and collage maker (free)

15. pho.to – Simple online photo editor with pre-set photo effects for an instant photo fix (free)

16. pizap – Online photo editor and collage maker ($29.99/year)

17. Fotostars – Edit your photos using stylish photo effects (free)

18. Avatan – Free online photo editor & collage maker

19. FotoFlexer – Photo editor and advanced photo effects for free

20. Picture2life – An Ajax-based photo editor focused on grabbing and editing images that are already online. The tool selection is average, and the user interface is poor.

21. Preloadr – A Flickr-specific tool that uses the Flickr API, even for account sign-in. The service includes basic cropping, sharpening, color correction and other tools to enhance images.


Free desktop editors

22. Photoscape – A simple, unusual editor that can handle more than just photos

23. Paint.net – Free image and photo editing software for PC

24. Krita – Digital painting and illustration application with CMYK support, HDR painting, G’MIC integration and more

25. Imagemagick – A software suite to create, edit, compose or convert images on the command line.

26. G’MIC – Full featured framework for image processing with different user interfaces, including a GIMP plugin to convert, manipulate, filter, and visualize image data. Available for Windows and OS

Paid desktop editors

27. Photoshop – Mother of all photo editors ($9.99/month)

28. Lightroom – A photo processor and image organizer developed by Adobe Systems for Windows and OS X ($9.99/month)

29. Capture One – Is a professional raw converter and image editing software designed for professional photographers who need to process large volumes of high quality images in a fast and efficient workflow (279 EUR)

30. Radlab – Combines intuitive browsing, gorgeous effects and a lightning-fast processing engine for image editing ($149)



Statistical terms used in research studies: A primer for media

2016-09-07 - By 

When assessing academic studies, media members are often confronted by pages not only full of numbers, but also loaded with concepts such as “selection bias,” “p-value” and “statistical inference.”

Statistics courses are available at most universities, of course, but are often viewed as something to be taken, passed and quickly forgotten. However, for media members and public communicators of many kinds it is imperative to do more than just read study abstracts; understanding the methods and concepts that underpin academic studies is essential to being able to judge the merits of a particular piece of research. Even if one can’t master statistics, knowing the basic language can help in formulating better, more critical questions for experts, and it can foster deeper thinking, and skepticism, about findings.

Further, the emerging field of data journalism requires that reporters bring more analytical rigor to the increasingly large amounts of numbers, figures and data they use. Grasping some of the academic theory behind statistics can help ensure that rigor.

Most studies attempt to establish a correlation between two variables — for example, how having good teachers might be “associated with” (a phrase often used by academics) better outcomes later in life; or how the weight of a car is associated with fatal collisions. But detecting such a relationship is only a first step; the ultimate goal is to determine causation: that one of the two variables drives the other. There is a time-honored phrase to keep in mind: “Correlation is not causation.” (This can be usefully amended to “correlation is not necessarily causation,” as the nature of the relationship needs to be determined.)

Another key distinction to keep in mind is that studies can either explore observed data (descriptive statistics) or use observed data to predict what is true of areas beyond the data (inferential statistics). The statement “From 2000 to 2005, 70% of the land cleared in the Amazon and recorded in Brazilian government data was transformed into pasture” is a descriptive statistic; “Receiving your college degree increases your lifetime earnings by 50%” is an inferential statistic.

Here are some other basic statistical concepts with which journalism students and working journalists should be familiar:

  • A sample is a portion of an entire population. Inferential statistics seek to make predictions about a population based on the results observed in a sample of that population.
  • There are two primary types of population samples: random and stratified. For a random sample, study subjects are chosen completely by chance, while a stratified sample is constructed to reflect the characteristics of the population at large (gender, age or ethnicity, for example). There are a wide range of sampling methods, each with its advantages and disadvantages.
  • Attempting to extend the results of a sample to a population is called generalization. This can be done only when the sample is truly representative of the entire population.
  • Generalizing results from a sample to the population must take into account sample variation. Even if the sample is selected completely at random, results still vary from sample to sample, so findings must be reported with a margin of error. For example, the results of a poll of likely voters could give the margin of error in percentage points: “47% of those polled said they would vote for the measure, with a margin of error of 3 percentage points.” Thus, if the actual percentage voting for the measure was as low as 44% or as high as 50%, this result would be consistent with the poll.
  • The greater the sample size, the more representative it tends to be of a population as a whole. Thus the margin of error falls and the confidence level rises.
  • Most studies explore the relationship between two variables — for example, that prenatal exposure to pesticides is associated with lower birthweight. This is called the alternative hypothesis. Well-designed studies seek to disprove the null hypothesis — in this case, that prenatal pesticide exposure is not associated with lower birthweight.
  • Significance tests of the study’s results determine the probability of seeing such results if the null hypothesis were true; the p-value indicates how unlikely this would be. If the p-value is 0.05, there is only a 5% probability of seeing such “interesting” results if the null hypothesis were true; if the p-value is 0.01, there is only a 1% probability.
  • The other threat to a sample’s validity is bias. Bias comes in many forms, but the most common is selection bias — bias in how subjects end up in the sample. For example, if subjects self-select into a sample group, the results are no longer externally valid, because the type of person who wants to be in a study is not necessarily similar to the population about which we are seeking to draw inferences.
  • When two variables move together, they are said to be correlated. Positive correlation means that as one variable rises or falls, the other does as well — caloric intake and weight, for example. Negative correlation indicates that two variables move in opposite directions — say, vehicle speed and travel time. So if a scholar writes “Income is negatively correlated with poverty rates,” what he or she means is that as income rises, poverty rates fall.
  • Causation is when change in one variable alters another. For example, air temperature and sunlight are correlated (when the sun is up, temperatures rise), but causation flows in only one direction. This is also known as cause and effect.
  • Regression analysis is a way to determine whether there is a correlation between two (or more) variables and how strong any correlation may be. At its most basic, this involves plotting data points on an X/Y axis (in the example cited above, vehicle weight and fatal accidents) and looking for the average causal effect — that is, examining how the graph’s dots are distributed and establishing a trend line. Again, correlation isn’t necessarily causation.
  • The correlation coefficient is a measure of linear association or clustering around a line.
  • While causation is sometimes easy to prove, it can often be difficult to establish because of confounding variables (unknown factors that affect the two variables being studied). Studies require well-designed and carefully executed experiments to ensure that the results are reliable.
  • When causation has been established, the factor that drives change (in the above example, sunlight) is the independent variable. The variable that is driven is the dependent variable.
  • Elasticity, a term frequently used in economics studies, measures how much a change in one variable affects another. For example, if the price of vegetables rises 10% and consumers respond by cutting back purchases by 10%, the expenditure elasticity is 1.0 — the increase in price equals the drop in consumption. But if purchases fall by 15%, the elasticity is 1.5, and consumers are said to be “price sensitive” for that item. If consumption were to fall only 5%, the elasticity is 0.5 and consumers are “price insensitive” — a rise in price of a certain amount doesn’t reduce purchases to the same degree.
  • Standard deviation provides insight into how much variation there is within a group of values. It measures the deviation (difference) from the group’s mean (average).
  • Be careful to distinguish the following terms as you interpret results: Average, mean and median. The first two terms are synonymous, and refer to the average value of a group of numbers. Add up all the figures, divide by the number of values, and that’s the average or mean. A median, on the other hand, is the central value, and can be useful if there’s an extremely high or low value in a collection of values — say, a Wall Street CEO’s salary in a list of people’s incomes. (For more information, read “Math for Journalists” or go to one of the “related resources” at right.)
  • Pay close attention to percentages versus percentage points — they’re not the same thing. For example, if 40 out of 100 homes in a distressed suburb have “underwater” mortgages, the rate is 40%. If a new law allows 10 homeowners to refinance, now only 30 mortgages are troubled. The new rate is 30%, a drop of 10 percentage points (40 – 30 = 10). This is not 10% less than the old rate, however — in fact, the decrease is 25% (10 / 40 = 0.25 = 25%).
  • In descriptive statistics, quantiles can be used to divide data into equal-sized subsets. For example, dividing a list of individuals sorted by height into two parts — the tallest and the shortest — results in two quantiles, with the median height value as the dividing line. Quartiles separate a data set into four equal-sized groups, deciles into 10 groups, and so forth. Individual items can be described as being “in the upper decile” — the group with the largest values — meaning that they are higher than 90% of those in the dataset. (Several of these concepts are illustrated in the short code sketch after this list.)
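
To make several of these terms concrete, here is a minimal Python sketch — using numpy and scipy, with invented numbers chosen purely for illustration — that computes a poll’s margin of error, a p-value, a correlation coefficient, a mean/median/standard deviation, a decile cutoff, an elasticity, and the difference between percentage points and percent.

    # A minimal sketch of several of the terms above, using invented numbers.
    import numpy as np
    from scipy import stats

    # Margin of error: 47% of 1,000 polled respondents favor a measure.
    p_hat, n = 0.47, 1000
    margin = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)   # ~0.031, i.e. about 3 points
    print(f"95% margin of error: +/- {margin:.1%}")

    # p-value: do two small (invented) groups of birthweights differ?
    exposed   = np.array([3.1, 2.9, 3.0, 2.8, 3.2, 2.7])   # kg
    unexposed = np.array([3.4, 3.3, 3.5, 3.1, 3.6, 3.2])
    t_stat, p_value = stats.ttest_ind(exposed, unexposed)
    print(f"p-value: {p_value:.3f}")   # below 0.05 is conventionally "significant"

    # Correlation coefficient: vehicle speed vs. travel time (negative correlation).
    speed = np.array([30, 40, 50, 60, 70])
    time  = np.array([60, 45, 36, 30, 26])
    r, _ = stats.pearsonr(speed, time)
    print(f"correlation coefficient r = {r:.2f}")

    # Mean vs. median vs. standard deviation for a skewed list of incomes.
    incomes = np.array([40_000, 45_000, 50_000, 52_000, 5_000_000])  # one CEO salary
    print(np.mean(incomes), np.median(incomes), np.std(incomes))

    # Deciles: the cutoff below which roughly 90% of the incomes fall.
    print(np.percentile(incomes, 90))

    # Elasticity: price rises 10%, purchases fall 15% -> elasticity of 1.5.
    price_change, quantity_change = 0.10, -0.15
    print(abs(quantity_change / price_change))

    # Percentage points vs. percent: underwater-mortgage rate falls from 40% to 30%.
    old_rate, new_rate = 0.40, 0.30
    print(f"{(old_rate - new_rate) * 100:.0f} percentage points, "
          f"a {(old_rate - new_rate) / old_rate:.0%} decrease")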

Note that understanding statistical terms isn’t a license to freely salt your stories with them. Always work to present studies’ key findings in clear, jargon-free language. You’ll be doing a service not only for your readers, but also for the researchers.

Related: See this more general overview of academic theory and critical reasoning courtesy of MIT’s Stephen Van Evera. A new open, online course offered on Harvard’s EdX platform, “Introduction to Statistics: Inference,” from UC Berkeley professors, explores “statistical ideas and methods commonly used to make valid conclusions based on data from random samples.”

There are also a number of free online statistics tutorials available, including one from Stat Trek and another from Experiment Resources. Stat Trek also offers a glossary that provides definitions of common statistical terms. Another useful resource is “Harnessing the Power of Statistics,” a chapter in The New Precision Journalism.

Original article here.


standard

Intel targets IoT machine vision firm Movidius

2016-09-06 - By 

Intel has moved to buy machine vision technology developer Movidius.

The attraction for Intel is Movidius’ capability to add low-power vision processing to IoT-enabled devices and autonomous machines.

Movidius has European design centres in Dublin and Romania.

Josh Walden, general manager of Intel’s New Technology Group, writes:

“The ability to track, navigate, map and recognize both scenes and objects using Movidius’ low-power and high-performance SoCs opens opportunities in areas where heat, battery life and form factors are key. Specifically, we will look to deploy the technology across our efforts in augmented, virtual and merged reality (AR/VR/MR), drones, robotics, digital security cameras and beyond.”

Its Myriad 2 family of Vision Processing Units (VPUs) is based on a sub-1W processing architecture, backed by a memory subsystem capable of feeding the processor array, as well as hardware acceleration to support large-scale operations.

Remi El-Ouazzane, CEO of Movidius, writes:

“As part of Intel, we’ll remain focused on this mission, but with the technology and resources to innovate faster and execute at scale. We will continue to operate with the same eagerness to invent and the same customer-focus attitude that we’re known for, and we will retain Movidius talent and the start-up mentality that we have demonstrated over the years.”

Its customers include DJI, FLIR, Google and Lenovo, which use its IP and devices in drones, security cameras and AR/VR headsets.

“When computers can see, they can become autonomous and that’s just the beginning. We’re on the cusp of big breakthroughs in artificial intelligence. In the years ahead, we’ll see new types of autonomous machines with more advanced capabilities as we make progress on one of the most difficult challenges of AI: getting our devices not just to see, but also to think,” said El-Ouazzane.

The terms of the deal have not been published.

Original article here.


standard

Why is there so much bullshit? (infographic)

2016-09-06 - By 

Did you ever wonder why you spend so much of your day wading through bullshit? Every worker must consume masses of information, but most of it is poorly written, impenetrable, and frustrating to consume. How did we get here?

I’ve actually studied this question. In fact, Chapter 2 of my book explains it in detail. Basically:

  • Reading on screens all day impairs our attention. According to Chartbeat, a person reading a news article online gives it an average of only 36 seconds of attention. Forrester Research reports that the only people who read more media in print than online are those 70 years old and older. It’s a noisy, jam-packed world of text that we all navigate, and that makes it harder for us to pay attention to what we read.
  • No one edits what we read. Compared to decades ago, most of what we consume is unedited. It’s first draft emails and self-created blog posts and Facebook updates. Even what passes for news these days gets a lot less editorial attention than it used to, and it shows.
  • We learned to write the wrong way. Our writing teachers have failed to prepare us for today’s business world. The sterile, formulaic five-paragraph theme of high school English gives way to the college professor who gives the best grades to the longest, wordiest papers. Writing for on-screen readers needs to be brief and pointed, but nobody teaches that in school — or at work.

Want to spread the word? Share the infographic below which puts it all together.

Blogging note: my new site design goes live today. Same content, different package. If you have design comments, please send them to me here rather than as comments on this post.

Original article here.


standard

Scientists look at how A.I. will change our lives by 2030

2016-09-05 - By 

Transportation, education, home security will all be affected by smart machines

By the year 2030, artificial intelligence (A.I.) will have changed the way we travel to work and to parties, how we take care of our health and how our kids are educated.

That’s the consensus from a panel of academic and technology experts taking part in Stanford University’s One Hundred Year Study on Artificial Intelligence.

Focused on trying to foresee the advances coming to A.I., as well as the ethical challenges they’ll bring, the panel yesterday released its first study.

The 28,000-word report, “Artificial Intelligence and Life in 2030,” looks at eight categories — from employment to healthcare, security, entertainment, education, service robots, transportation and poor communities — and tries to predict how smart technologies will affect urban life.

“We believe specialized A.I. applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts, said in a written statement. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of A.I. are broadly shared.”

A.I. has taken it on the chin in recent years, with prominent figures like physicist Stephen Hawking and high-tech entrepreneur Elon Musk decrying the societal dangers of the technology.

Unlike Musk, who equated developing A.I. with summoning a demon, the A.I. report issued this week shows that scientists anticipate some problems but also numerous benefits with advancing the technology.

“A.I. technologies can be reliable and broadly beneficial,” said Barbara Grosz, a Harvard computer scientist and chair of the AI100 committee. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

In the study, researchers said that when it comes to A.I. and transportation, autonomous vehicles and even aerial delivery drones could change both travel and life patterns in cities. The study also notes that home service robots won’t just clean but will offer security, while smart sensors will monitor people’s blood sugar and organ functions, robotic tutors will augment human instruction and A.I. will lead to new ways to deliver media in more interactive ways.

And while the report also notes that A.I. could improve services like food distribution in poor neighborhoods and analyze crime patterns, it’s not all positive.

For instance, A.I. could lead to significant changes in the workforce as robots and smart systems take over base-level jobs, like moving inventory in warehouses, scheduling meetings and even offering some financial advice.

However, A.I. also will open up new jobs, such as data analysts who will be needed to make sense of all the new information computers are amassing.

The study noted that work should begin now on figuring out how to help people adapt as the economy undergoes rapid changes and existing jobs are lost while new ones are created.

“Until now, most of what is known about A.I. comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how A.I. technologies are likely to affect society.”

Original article here.

 

standard

Fujitsu now making DRAM killer with 1,000x performance boost

2016-09-04 - By 

But Nano-RAM faces limitations from the DDR4 interface, even as it promises limitless longevity

Fujitsu Semiconductor Ltd. has become the first manufacturer to announce it is mass-producing a new RAM that boasts 1,000 times the performance of DRAM but stores data like NAND flash memory.

The new non-volatile memory known as Nano-RAM (NRAM) was first announced last year and is based on carbon nanotube technology.

Fujitsu Semiconductor plans to develop a custom embedded storage-class memory module using the DDR4 interface by the end of 2018, with the goal of expanding the product line-up into a stand-alone NRAM product family from Fujitsu’s foundry, Mie Fujitsu Semiconductor Ltd.; the stand-alone memory module will be sold through resellers, who’ll rebrand it.

According to Nantero, the company that invented NRAM, seven fabrication plants in various parts of the world experimented with the new memory last year. And other as-yet unannounced chipmakers are already ramping up production behind the scenes.

Fujitsu plans to initially manufacture the NRAM using a 55-nanometer (nm) process, which refers to the size of the transistors used to store bits of data. At that size, the initial memory modules will only be able to store megabytes of data. However, the company also plans a next-generation 40nm-process NRAM version, according to Greg Schmergel, CEO of Nantero.

Initially, NRAM products will likely be aimed at the data center and servers. But over time they could find their way into the consumer market — and even into mobile devices. Because it uses power in femtojoules (10^-15 of a joule) and requires no data clean-up operations in the background, as NAND flash does, NRAM could extend the battery life of a mobile device in standby mode for months, Schmergel said.

Fujitsu has not specified whether its initial NRAM product will be produced as a DIMM (dual in-line memory module), but Schmergel said one of the other fabrication partners “is definitely doing just that…for a DDR4 compatible chip in product design.

“There are several others [fabricators] we are still working with, and one, for example, is focused on a 28nm process and that’s a multi-gigabyte stand-alone memory product,” Schmergel said, referring to the DIMM manufacturer.

Currently, NRAM is being produced as a planar memory product, meaning memory cells are laid out horizontally across a two-dimensional plane. However, just as the NAND flash industry has, Nantero is developing a three dimensional (3D) multilayer architecture that will greatly increase the memory’s density.

“We were forced to go into 3D multilayer technology maybe sooner than we realized because customers want those higher densities,” Schmergel said. “We expect densities will vary from fab to fab. Most of them will produce four to eight layers. We can do more than that. Nanotube technology is not the limiting factor.”

Currently, NRAM can be produced for about half the cost of DRAM, Schmergel said, adding that with greater densities, production costs will also shrink — just as they have for the NAND flash industry.

“My understanding is that Nantero plans to bring NRAM to the market as an embedded memory in MCUs and ASICs for the time being,” Jim Handy, principal analyst with semiconductor research firm Objective Analysis, said in an email reply to Computerworld. “This is a good strategy, since flash processes are having trouble keeping pace with the logic processes that are used to make MCUs and ASICs.

“An alternative technology like NRAM then stands a chance of getting into high volumes on the back of the MCU and ASIC markets,” Handy added. “After that, it could challenge DRAM, but it will have some trouble getting to cost parity with DRAM until its unit volume rises to a number close to that of DRAM.”

Should DRAM stop scaling, though, NRAM will encounter a big opportunity since it promises to scale at lower prices than DRAM will be able to reach, according to Handy.

Because of its potential to store increasingly more data as its density increases, NRAM could also someday replace NAND flash as the price to produce it drops along with  economies of scale, Schmergel said.

“We’re really focused in the next few years on competing with DRAM where costs don’t need to be as low as NAND flash,” Schmergel said.

One big advantage NRAM has over traditional flash memory is its endurance. Flash memory can only sustain a finite number of program/erase (P/E) cycles — typically around 5,000 to 8,000 per flash cell before the memory begins to fail. The best NAND flash, with error correction code and wear-leveling software, can withstand about 100,000 P/E cycles.

Carbon nanotubes are strong — very strong. In fact, they’re 50 times stronger than steel, and they’re only 1/50,000th the width of a human hair. Because of carbon nanotubes’ strength, NRAM has far greater write endurance than NAND flash; the program/erase (P/E) cycles it can endure are practically infinite, according to Schmergel.

NRAM has been tested by Nantero to withstand 10^12 P/E cycles and 10^15 read cycles, Schmergel said.

In 2014, a team of researchers at Chuo University in Tokyo tested Nantero’s NRAM up to 10^11 P/E (program/erase) cycles — about 100 billion write cycles.

“We expect it to have unlimited endurance,” Schmergel said.
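
As a rough back-of-the-envelope illustration of what such endurance ratings imply — the daily-rewrite workload below is an assumption for illustration, not a figure from the article — a few lines of Python compare rated lifetimes:

    # Back-of-the-envelope lifetime comparison based only on rated P/E cycles.
    # Assumed workload (illustrative, not from the article): the device's full
    # capacity is rewritten 10 times per day.
    FULL_REWRITES_PER_DAY = 10

    def years_of_life(pe_cycles: float,
                      rewrites_per_day: float = FULL_REWRITES_PER_DAY) -> float:
        """Years until the rated program/erase cycles would be exhausted."""
        return pe_cycles / rewrites_per_day / 365

    for name, cycles in [("typical NAND flash cell", 8_000),
                         ("best NAND (ECC + wear leveling)", 100_000),
                         ("NRAM as tested by Nantero", 1e12)]:
        print(f"{name:32s} ~{years_of_life(cycles):,.0f} years")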

Another advantage is that NRAM is being built using the DDR4 specification interface, so it could sport up to 3.2 billion data transfers per second, or 2,400 Mbps — more than twice as fast as NAND flash. Natively, however, NRAM’s read/write capability is thousands of times faster than NAND flash, Schmergel said; the bottleneck is the computer bus interface.

“Nanotube switch [states] in picoseconds — going off to on and on to off,” Schmergel said. A picosecond is one trillionth of a second.

Because the company designed the memory to use the DDR4 interface, real-world speeds will be limited by that bus; the 1,000-times-faster-than-DRAM figure is therefore a potential that shows up only on a technical specification sheet.

Another advantage is that NRAM is resistant to extreme heat. It can withstand up to 300 degrees Celsius. Nantero claims its memory can last thousands of years at 85 degrees Celsius and has been tested at 300 degrees Celsius for 10 years. Not one bit of data was lost, the company claims.

How NRAM works

Carbon nanotubes are grown from catalyst particles, most commonly iron.

NRAM is made up of an interlocking fabric matrix of carbon nanotubes that can either be touching or slightly separated. Each NRAM “cell,” or transistor, is made up of a network of carbon nanotubes between two metal electrodes. The memory acts the same way as other resistive non-volatile RAM technologies.

Carbon nanotubes that are not in contact with each other are in the high-resistance state that represents “off” or “0.” When the carbon nanotubes contact each other, they take on the low-resistance state of “on” or “1.”
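
As a purely illustrative toy model — a software stand-in, not how the hardware is actually programmed — the cell’s two resistance states can be thought of as encoding one bit:

    # Toy software model of a resistive memory cell: two resistance states, one bit.
    # Purely illustrative; real NRAM cells are set and sensed electrically.
    HIGH_RESISTANCE = "nanotubes separated"   # "off" / 0
    LOW_RESISTANCE  = "nanotubes touching"    # "on"  / 1

    class ResistiveCell:
        def __init__(self) -> None:
            self.state = HIGH_RESISTANCE      # start in the "0" state

        def write(self, bit: int) -> None:
            # Writing either pulls the nanotubes into contact (1) or apart (0).
            self.state = LOW_RESISTANCE if bit else HIGH_RESISTANCE

        def read(self) -> int:
            # Reading senses the resistance without changing it; the stored bit
            # persists with no power, which is what makes the memory non-volatile.
            return 1 if self.state == LOW_RESISTANCE else 0

    cell = ResistiveCell()
    cell.write(1)
    print(cell.read())   # 1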

In terms of new memories, NRAM is up against a crowded field of emerging technologies that are expected to challenge NAND flash in speed, endurance and capacity, according to Handy.

For example, Ferroelectric RAM (FRAM) has shipped in high volume; IBM has developed Racetrack Memory; Intel, IBM and Numonyx have all produced Phase-Change Memory (PCM); Magnetoresistive Random-Access Memory (MRAM) has been under development since the 1990s; Hewlett-Packard and Hynix have been developing ReRAM, also called Memristor; and Infineon Technologies has been developing Conductive-Bridging RAM (CBRAM).

Another potential NRAM competitor, however, could be 3D XPoint memory, which will be released this year by development partners Intel and Micron.

Micron, which will market it under the name QuantX (and Intel under the name Optane), is targeting NAND flash because the technology is primarily a mass storage-class memory that, while slower than DRAM, is cheaper to produce than DRAM and vastly faster than NAND.

“We’re at DRAM speed. We have far greater endurance,” Schmergel said.

“It should be superior to 3D XPoint, which wears and has a slower write than read cycle,” Handy said. “If this is true, and if its costs can be brought to something similar to DRAM’s costs, then it is positioned to replace DRAM. Cost is the big issue here though, since it takes very high unit volumes for prices to get close to those of DRAM.

“It’s a chicken-and-egg problem: Costs will drop once volumes get high enough, and volume will get high if the cost is competitive with DRAM costs.”

Original article here.


standard

Amazon Alexa support coming to LG’s SmartThinQ hub

2016-09-03 - By 

You’ll be able to add calendar items and play music, but there’s no smart home control yet.

When LG launched its SmartThinQ hub at CES this year, you couldn’t help but notice that it was a dead ringer for Amazon’s Echo but, well, dumber. That’s because the device could play music and control LG SmartThinQ appliances, but wouldn’t obey your voice commands like an Echo. However, LG has announced that it will join Amazon rather than fighting it by adding support for the Echo’s Alexa voice assistant.

Amazon recently opened Alexa up to third-party companies, but the SmartThinQ hub will get a limited set of features to start with, according to CNET. While it will listen to your commands, let you play music and schedule events on a calendar, it won’t control lights, thermostats or other smart home devices like the Echo. That’s a bit of an odd shortcoming, considering that the SmartThinQ hub is part of LG’s SmartThinQ appliance family, so it’s specifically designed for smart home devices. Hopefully we’ll know more soon, but meanwhile, there’s still no release date or pricing for the SmartThinQ hub.


standard

How Cloud Computing Is Changing the Software Stack

2016-09-01 - By 

Are sites, applications, and IT infrastructures leaving the LAMP stack (Linux, Apache, MySQL, PHP) behind? How have the cloud and service-oriented, modular architectures facilitated the shift to a modern software stack?

As more engineers and startups are asking the question “Is the LAMP stack dead?”—on which the jury is still out—let’s take a look at “site modernization,” the rise of cloud-based services, and the other ever-changing building blocks of back-end technology.

From the LAMP Era to the Cloud

Stackshare.io recently published its findings about the most popular components in many tech companies’ software stacks these days—stacks that are better described as “ecosystems” due to their integrated, interconnected array of modular components, software-as-a-service (SaaS) providers, and open-source tools, many of which are cloud-based.

It’s an interesting shift. Traditional software stacks used to be pretty cut and dried. Acronyms like LAMP, WAMP, and MEAN neatly described a mix of onsite databases, servers, and operating systems built with server-side scripts and frameworks. When these systems grow too complex, the productivity they enable can be quickly eclipsed by the effort it takes to maintain them. That point is up for debate — anything that’s built well from the ground up should be sturdy and scalable — but a more modular stack approach has still prompted many to make the shift.

A shift in the software stack status quo?

For the last five or so years, the monolithic, LAMP-style approach has increasingly been questioned as the best possible route. Companies are migrating data and servers to the cloud, opting for streamlined API-driven data exchange, and using SaaS and PaaS solutions as super-scalable ways to build applications. In addition, they’re turning to a diverse array of technologies that can be more easily customized and integrated with one another — mainly JavaScript libraries and frameworks — allowing companies to be more nimble, and less reliant on big stack architectures.

But modularity is not without its complexities, and it’s also not for everyone. SaaS, mobile, and cloud-computing companies are more likely to take a distributed approach, while financial, healthcare, big data, and e-commerce organizations are less likely to. With the right team, skills, and expectations, however, it can be a great fit.

New, scalable building blocks like Nginx, New Relic, Amazon EC2, and Redis are stealing the scene as tech teams work toward more modular, software-based ecosystems—and here are a few reasons why.

What are some of the key drivers of this shift?

1. Continuous deployment

What’s the benefit of continuous deployment? Shorter concept-to-market development cycles that allow businesses to give customers new features faster, or adjust to what’s happening with traffic.

It’s possible to continuously deploy with a monolith architecture, but certain organizations are finding this easier to do beyond a LAMP-style architecture. Having autonomous microservices allows companies to deploy in chunks continuously, without dependencies and the risk of one failure causing another related failure. Tools like GitHub, Amazon EC2, and Heroku allow teams to continuously deploy software, for example, in an Agile sprint-style workflow.

2. The cloud is creating a new foundation

Cloud providers have completely shaken up the LAMP paradigm. Providers like Amazon Web Services (AWS) are creating entirely new foundations with cloud-based modules that don’t require constant attention, upgrades, and fixes. Whereas stacks used to comprise a language (Perl, Python, or PHP), a database (MySQL), a server, operating system, application servers, and middleware, now there are cloud modules, APIs, and microservices taking their place.

3. Integration is simplified

Tools need to work together, and thanks to APIs and modular services, they can — without a lot of hassle. Customer service platforms need to integrate with email and databases, automatically. Many of the new generation of software solutions not only work well together, they build on one another and can become incredibly powerful when paired up — Salesforce’s integrated SaaS offerings, for example.
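
As a toy illustration of this kind of API glue — the endpoints, paths, and token below are hypothetical placeholders, not any vendor’s real API — a few lines of Python can pull a new ticket from a help-desk service and post a summary to a team chat webhook:

    # Hypothetical glue between two SaaS tools over their HTTP APIs.
    # The URLs and token below are placeholders, not real product endpoints.
    import requests

    HELPDESK_API = "https://helpdesk.example.com/api/tickets"
    CHAT_WEBHOOK = "https://chat.example.com/hooks/ABC123"

    # Pull the newest open ticket from the (hypothetical) help-desk service...
    resp = requests.get(HELPDESK_API,
                        params={"status": "open", "limit": 1},
                        headers={"Authorization": "Bearer <token>"},
                        timeout=10)
    resp.raise_for_status()
    ticket = resp.json()[0]

    # ...and push a one-line summary into the (hypothetical) team chat channel.
    requests.post(CHAT_WEBHOOK,
                  json={"text": f"New ticket #{ticket['id']}: {ticket['subject']}"},
                  timeout=10)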

4. Elasticity and affordable scalability

Cloud-based servers, databases, email, and data processing allow companies to rapidly scale up — something you can learn more about in this Intro to Cloud Bursting article. Rather than provisioning more hardware, along with the time (and space) it takes to set that hardware up, companies can purchase more space in the cloud on demand. This makes it easier to ramp up data processing. AWS really excels here, and is a top choice: companies like Upwork, Netflix, Adobe and Comcast have built their stacks with its cloud-based tools.

For areas like customer service, testing, analytics, and big data processing, modular components and services also rise to the occasion when demand spikes.
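
To make the “on demand” part concrete, here is a minimal sketch using AWS’s boto3 SDK — the AMI ID is a placeholder, and credentials and region are assumed to come from your local AWS configuration — that requests extra EC2 capacity and releases it again:

    # Minimal sketch of on-demand elasticity with AWS EC2 via boto3.
    # The AMI ID is a placeholder; credentials and region come from your AWS config.
    import boto3

    ec2 = boto3.client("ec2")

    # Spin up extra capacity when demand spikes...
    response = ec2.run_instances(ImageId="ami-0123456789abcdef0",  # placeholder AMI
                                 InstanceType="t3.micro",
                                 MinCount=1,
                                 MaxCount=4)
    instance_ids = [i["InstanceId"] for i in response["Instances"]]

    # ...and release it when the spike passes, so you stop paying for it.
    ec2.terminate_instances(InstanceIds=instance_ids)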

5. Flexibility and customization

The beauty of many of these platforms is that they come ready to use out of the box — but with lots of room to tweak things to suit your needs. Because the parts are autonomous, you also have the flexibility to mix and match your choice of technologies — whether those are different programming languages or frameworks and databases that are particularly well-suited to certain apps or projects.

Another thing many organizations love is the ability to swap out one component for another without a lot of back-end reengineering. It is possible to replace parts in a monolith architecture, but for companies that need to get systems up and running fast—and anticipate a spike in growth or a lack of resources—modular components make it easy to swap out one for another. Rather than trying to adapt legacy technology for new purposes, companies are beginning to build, deploy, and run applications in the cloud.

6. Real-time communication and collaboration

Everyone wants to stay connected and communicate—especially companies with distributed engineering teams. Apps that let companies communicate internally and share updates, information, and more are some of the most important parts of modern software stacks. Here’s where a chat app like HipChat comes in, and other software like Atlassian’s JIRA, Confluence, Google Apps, Trello, and Basecamp. Having tools like these helps keep everyone on the same page, no matter what time zone they’re in.

7. Divvying up work between larger teams and distributed teams

When moving architectures to distributed systems, it’s important to remember that the more complicated a system is, the more a team will have to keep up with a new set of challenges that come along with cloud-based systems: failures, eventual consistency, and monitoring. Moving away from the LAMP-style stack is as much a cultural change as it is a technical one; be sure you’re engaging MEAN stack engineers and DevOps professionals who are skilled with this new breed of stack.

So what are the main platforms shaking up the stack landscape?

The Stackshare study dubbed this new generation of tech companies leaving LAMP behind as “GECS companies”—named for their predominant use of GitHub, Amazon EC2, and Slack, although there are many same-but-different tools like these three platforms.

Upwork has moved its stack to AWS, a shift that the Upwork engineering team is documenting on the Upwork blog. These new platforms offer startups and other businesses more democratization of options—with platforms, cloud-based servers, programming languages, and frameworks that can be combined to suit their specific needs.

  • JavaScript: JavaScript is the biggest piece of the new, post-LAMP pie. Think of it as the replacement for the “P” (PHP) in LAMP. It’s a front-end scripting language, but it’s so much more—it’s a stack-changer. JavaScript is powerful for both the front-end and back-end, thanks to Node.js, and is even outpacing some mobile technologies. Where stacks were once more varied between client and server, JavaScript is creating a more fluid, homogeneous stack, with a multitude of frameworks like Backbone, Express, Koa, Meteor, React, and Angular.
  • Ruby and Python also dominate the new back-end stack, along with Node.js.
  • Amazon Web Services (AWS): The AWS cloud-based suite of products is the new foundation for many organizations, offering everything from databases and developer tools to analytics, mobile and IoT support, and networking.
  • Computing platforms: Amazon EC2, Heroku, and Microsoft Azure
  • Databases: PostgreSQL, with some MongoDB and MySQL.

The good news? There’s no shortage of Amazon Web Services pros, freelance DevOps engineers, and freelance data scientists who are skilled in all of these newer platforms and technologies and poised to help companies get new, cloud-based stacks up and running.

Read more at http://www.business2community.com/brandviews/upwork/cloud-computing-changing-software-stack-01644544#kEgMIdXIW7Q0ZpOt.99

