Posted On: July 2017 - AppFerret


Python Tops 2017’s Most Popular Programming Languages

2017-07-26

Deciding which programming languages to study, whether before college, during it, or in continuing professional development, can have a significant impact on your employment prospects and opportunities thereafter. Given this, periodic efforts have been made to rank the most important and popular languages over time, to give more insight into where it’s best to focus one’s efforts.

IEEE Spectrum has just put together its fourth interactive list of top programming languages. The group designed the list to allow users to weight their own interests and use-cases independently. You can access the full list and sort it by language type (Web, Mobile, Enterprise, Embedded), fastest growing markets, general trends in usage, and languages popular specifically for open source development. You can also implement your own customized sorting methods.
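To make that custom weighting concrete, here is a minimal sketch of how combining several popularity signals under user-chosen weights can reorder a ranking. This is an illustration only: the metric names, values, and weights are invented, not IEEE Spectrum’s actual data or method.

```python
# Hypothetical popularity signals per language (invented numbers).
metrics = {
    "Python": {"search": 100, "jobs": 95, "open_source": 98},
    "C":      {"search": 98,  "jobs": 92, "open_source": 99},
    "Java":   {"search": 97,  "jobs": 100, "open_source": 88},
}

def rank(weights):
    # Weighted sum of each language's signals, then normalised so the
    # leader scores 100, as in the published list.
    raw = {lang: sum(weights[m] * v for m, v in sig.items())
           for lang, sig in metrics.items()}
    top = max(raw.values())
    return sorted(((round(100 * s / top, 1), lang) for lang, s in raw.items()),
                  reverse=True)

print(rank({"search": 0.1, "jobs": 0.8, "open_source": 0.1}))  # jobs-heavy: Java leads
print(rank({"search": 0.1, "jobs": 0.1, "open_source": 0.8}))  # OSS-heavy: C leads
```

The same mechanism explains why the leaders shift between the Web, Mobile, Enterprise, and Embedded views discussed below.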

Programming language rankings and image by IEEE Spectrum

Python has been rising for the past few years, but last year it was as far back as #3, whereas this year it wins overall with a score of 100. Python, C, Java, and C++ make up the top four, all well above 95, while the fifth-place contestant, C# (Microsoft’s own language, developed as part of its .NET framework), sits at a solid 88.6. The drop-off between spots #5 and #10 is never as large as the gap between C++ and C#, and the tenth language, Apple’s Swift, makes the list for the first time with an overall score of 75.3.

Previously popular languages like Ruby have fallen dramatically, which is part of why Swift has had the opportunity to rise. Apple’s predecessor language to Swift, Objective-C, has fallen to 26th place as Apple transitions itself and developers over to the newer language.

The rankings do change somewhat, depending on your market segment. In Embedded, for example, the top five ranks are occupied by C, C++, Arduino, Assembly, and Haskell. In Mobile, the Top 5 are C, Java, C++, C#, and JavaScript. For web development, the Top 5 are Python, Java, C#, JavaScript, and PHP.

How you weight the criteria, in other words, leads to a fairly different distribution of languages. But while Python may be IEEE’s overall top choice, it’s not necessarily the best choice if you’re trying to cover a lot of bases or hit broad targets. At least one variant of C is present in the Top 5 of every single category, and in several categories C, C++, and C# together occupy three of the five slots (the Web category is anomalous in this regard, as only C# makes its Top 5).

IEEE continues to refine its criteria and measurements and has applied these new weightings to the previous year’s results as well. If you want more information on how IEEE weights the data, or want to see how languages compare year-on-year, all such information is available here.

Original article here.



Gartner’s Hype Cycle: AI for Marketing

2017-07-24

Gartner’s 2017 Hype Cycle for Marketing and Advertising is out (subscription required) and, predictably, AI for Marketing has appeared as a new dot making a rapid ascent toward the Peak of Inflated Expectations. I say “rapid” but some may be surprised to see us projecting that it will take more than 10 years for AI in Marketing to reach the Plateau of Productivity. Indeed, the timeframe drew some skepticism and we deliberated on this extensively, as have many organizations and communities.

AI for Marketing on the 2017 Hype Cycle for Marketing and Advertising

First, let’s be clear about one thing: a long journey to the plateau is not a recommendation to ignore a transformational technology. However, it does raise questions of just what to expect in the nearer term.

Skeptics of a longer timeframe rightly point out the velocity with which digital leaders from Google to Amazon to Baidu and Alibaba are embracing these technologies today, and the impact they’re likely to have on marketing and advertising once they’ve cracked the code on predicting buying behavior and customer satisfaction and acting accordingly.

There’s no point in debating the seriousness of the leading digital companies when it comes to AI. The impact that AI will have on marketing is perhaps more debatable – some breakthrough benefits are already being realized, but – to use some AI jargon here – many problems at the heart of marketing exhibit high enough dimensionality to suggest they’re AI-complete. In other words, human behavior is influenced by a large number of variables, which makes it hard to predict unless you’re human. On the other hand, we’ve seen dramatic lifts in conversion rates from AI-enhanced campaigns, and the global scale of markets means that even modest improvements in matching people with products could have major effects. Net-net, we do believe AI will have a transformational effect on marketing and that some of these transformational effects will be felt in fewer than ten years – in fact, they’re being felt already.

Still, in the words of Paul Saffo, “Never mistake a clear view for a short distance.” The magnitude of a technology’s impact is, if anything, a sign it will take longer than expected to reach some sort of equilibrium. Just look at the Internet. I still vividly recall the collective expectation that many of us held in 1999 that business productivity was just around the corner. The ensuing descent into the Trough of Disillusionment didn’t diminish the Internet’s ultimate impact – it just delayed it. But the delay was significant enough to give a few companies that kept the faith, like Google and Amazon, an insurmountable advantage when the Internet at last plateaued, about 10 years later.

Proponents of faster impact point out that AI has already been through a Trough of Disillusionment maybe ten times as long as the Internet’s – the “AI Winter” that you can trace to the 1980s. By this reckoning, productivity is long overdue. This may be true for a number of domains – such as natural language processing and image recognition – but it’s hardly the case for the kinds of applications we’re considering in AI for Marketing. Before we could start on those, we needed massive data collection on the input side, a cloud-based big data machine learning infrastructure, and real-time operations on the output side to accelerate the learning process to the point where we could start to frame the optimization problem in AI. Some of the algorithms may be quite old, but their real-time marketing context is certainly new.

More importantly, consider the implications of replacing the way marketing works today with true lights-out AI-driven operations. Even when machines do outperform their human counterparts in making the kinds of judgments marketers pride themselves on, the organizational and cultural resistance they will face from the enterprise is profound, with important exceptions: disruptive start-ups and the digital giants who are developing these technologies and currently dominate digital media.

And enterprises aren’t the only source of resistance. The data being collected in what’s being billed as “people-based marketing” – the kind that AI will need to predict and influence behavior – is the subject of privacy concerns that stem from the “people’s” notable lack of an AI ally in the data collection business. See more comments here.

Then consider this: In 2016, P&G spent over $4B on media. Despite their acknowledgment of the growing importance of the Internet to their marketing (20 years in), they still spend orders of magnitude more on TV (see Ad Age, subscription required). As we know, Marc Pritchard, P&G’s global head of brands, doesn’t care much for the Internet’s way of doing business and has demanded fundamental changes in what he calls its “corrupt and non-transparent media supply chain.”

Well, if Marc and his colleagues don’t like the Internet’s media supply chain, wait until they get a load of the emerging AI marketing supply chain. Here’s a market where the same small group of gatekeepers own the technology, the data, the media, the infrastructure – even some key physical distribution channels – and their business models are generally based on extracting payment from suppliers, not consumers who enjoy their services for “free.” The business impulses of these companies are clear: just ask Alexa. What they haven’t perfected yet is that shopping concierge that gets you exactly what you want, but they’re working on it. If their AI can solve that, then two of P&G’s most valuable assets – its legacy media-based brand power and its retail distribution network – will be neutralized. Does this mean the end of consumer brands? Not necessarily, but our future AI proxies may help us cultivate different ideas about brand loyalty.

This brings us to the final argument against putting AI for Marketing too far out on the hype cycle: it will encourage complacency in companies that need to act. By the time established brands recognize what’s happened, it will be too late.

Business leaders have told me they use Gartner’s Hype Cycles in two ways. One is to help build urgency behind initiatives that are forecast to have a large, near-term impact, especially ones tarnished by disillusionment. The second is to subdue people who insist on drawing attention to seductive technologies on the distant horizon. Neither use is appropriate for AI for Marketing. In this case, the long horizon is neither a cause for contentment nor a reason to go shopping.

First, brands need a plan. And the plan has to anticipate major disruptions, not just in marketing, but in the entire consumer-driven, AI-mediated supply chain in which brands – or their AI agents – will find themselves negotiating with a lot of very smart algorithms. I feel confident in predicting that this will take a long time. But that doesn’t mean it’s time to ignore AI. On the contrary, it’s time to put learning about and experiencing AI at the top of the strategic priority list, and to consider what role your organization will play when these technologies are woven into our markets and brand experiences.

Original article here.



Kaspersky Lab and Russian Intelligence FSB

2017-07-17

Yesterday, the Trump Administration released a statement indicating that Kaspersky Lab, one of the largest security companies in the world, would no longer be allowed to sell its products or services to the federal government. At the time, it wasn’t clear why the government had taken this step, and the CEO of Kaspersky Lab, Eugene Kaspersky, has strenuously argued that his company is being treated as a pawn in a game of chess between the US and Russia.

Kaspersky told ABC News that any concerns about his product were based in “ungrounded speculation and all sorts of other made-up things,” before adding that he and his company “have no ties to any government, and we have never helped nor will help any government in the world with their cyberespionage efforts.”

Now that last claim looks particularly dubious. According to emails obtained by Bloomberg Businessweek (and confirmed by Kaspersky Lab as genuine), Kaspersky’s ties to the Russian FSB (the successor to the KGB) are much tighter than previously reported. The company has allegedly worked with the government to develop security software and worked on joint projects that “the CEO knew would be embarrassing if made public.”

It’s common — in fact, it’s practically essential — for security firms to work closely with their own governments, both in terms of providing security solutions and in actively monitoring for threats or suspicious activity. But there’s a difference between working with the federal government of your nation and acting as an agent working on behalf of that government. These leaked emails seem to show the company slipping over that line.

The first part of the described project was a contract to build a better DDoS defense system that could be used by both the Russian government and other Kaspersky clients. Nothing unusual about that. But Kaspersky went further and agreed to some extremely unusual conditions. According to ABC News’ report, Kaspersky wrote that the project contained technology to protect against filter attacks, as well as implementing what researchers call “Active Countermeasures.”

But there’s more to the story. Kaspersky also provided the FSB with real-time intelligence on the hackers’ locations and sent experts to accompany the FSB on its investigations and raids. ABC’s source described the situation as, “They weren’t just hacking the hackers; they were banging on the doors.”

Certain members of Congress and US government intelligence agencies have both warned against using Kaspersky Lab in any sensitive government or business setting. This could easily explain why. Installing software that can phone home to a company affiliated with the FSB could be a major problem should hackers come calling. Kaspersky also sells a secure operating system, KasperskyOS, designed to run on critical infrastructure, factories, pipelines, and even self-driving cars. The US Defense Intelligence Agency has reportedly circulated internal memos warning of the risks of using Kaspersky’s system, even as the company continues to deny that any connection between itself and Russia actually exists.

One More Thing…

Some will argue that this is mere political theater. After all, didn’t AT&T, Yahoo, Microsoft, Google, and a number of other companies comply with onerous requests made in dubious circumstances from the NSA and FBI? The answer, of course, is yes. But there are meaningful differences here: To the best of our knowledge, no one from Microsoft or AT&T ever did a ride-along on a raid to capture a suspect. It’s also a fact that more than one company fought hard against being forced to provide such evidence, capitulating only when all of the court cases and appeals had failed.

There may not be much practical difference between the end product delivered by a company that takes a job willingly and one that takes it only under duress, but there is a moral difference. Whether it’s Tim Cook going to court to protect user privacy or Google promptly encrypting all of its traffic, including within the data center, more than a few US companies have taken (or tried to take) strong stances against such spying. That doesn’t make them perfect. It may not even make them worthy of praise. But it does highlight a meaningful difference between what happened in Russia and what’s happened in the United States.

Original article here.

 



Big Data Analytics in Healthcare: Fuelled by Wearables and Apps

2017-07-11

Driven by specialised analytics systems and software, big data analytics has halved the time required to double medical knowledge, compressing the healthcare innovation cycle, according to the much-discussed Mary Meeker study titled Internet Trends 2017.

The presentation of the study is seen as evidence of the proverbial big data-enabled revolution predicted by experts like McKinsey and Company. “A big data revolution is under way in health care. Over the last decade pharmaceutical companies have been aggregating years of research and development data into medical data bases, while payors and providers have digitised their patient records,” the McKinsey report said four years ago.

The Mary Meeker study shows that in the 1980s it took seven years to double medical knowledge; after 2010, that figure fell to only 3.5 years, on account of the massive use of big data analytics in healthcare. Though most of the samples used in the study were US-based, the global trends it reveals are visible in India too.
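To see what halving the doubling time implies, here is a quick back-of-the-envelope calculation. It assumes simple exponential growth, which the study’s framing implies but does not spell out:

```python
# If knowledge doubles every T years, it grows by a factor of 2**(years / T).
def growth_factor(years, doubling_time):
    return 2 ** (years / doubling_time)

print(round(growth_factor(10, 7.0), 1))  # 1980s pace: ~2.7x per decade
print(round(growth_factor(10, 3.5), 1))  # post-2010 pace: ~7.3x per decade
```

In other words, at the post-2010 pace a decade multiplies medical knowledge roughly sevenfold rather than under threefold.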

“Medicine and underlying biology is now becoming a data-driven science where large amounts of structured and unstructured data relating to biological systems and human health is being generated,” says Dr Rohit Gupta of MedGenome, a genomics-driven research and diagnostics company based in Bengaluru.

Dr Gupta told Firstpost that big data analytics has made it possible for MedGenome, which focuses on improving global health by decoding genetic information contained in an individual genome, to dive deeper into genetics research.

“While any individual’s genome information is useful for detecting the known mutations for diseases, underlying new patterns of complicated diseases and their progression requires genomics data from many individuals across populations — sometimes several thousands to even few millions amounting to exabytes of information,” he said.

All of this would have been a cumbersome process without the latest data analytics tools that big data analytics has brought forth.

The company, which started building India-specific baseline data for more accurate gene-based diagnostic testing kits in 2015, now conducts 400 genetic tests across all key disease areas.

What is big data?

According to Mitali Mukerji, senior principal scientist at the Council of Scientific and Industrial Research, when a large number of people and institutions digitally record health data, either in health apps or in digitised clinics, this information becomes big data about health. The data acquired from these sources can be analysed to search for patterns or trends, enabling deeper insight into health conditions and early, actionable interventions.

Big data is growing bigger
But big data analytics requires big data, and the proliferation of information technology in the health sector has exponentially increased its flow from sources such as dedicated wearable health gadgets like fitness trackers and hospital databases. Big data collection in the health sector has also been made possible by the proliferation of smartphones and health apps.

The Meeker study shows that health app downloads worldwide rose to nearly 1,200 million in 2016 from nearly 1,150 million the year before; 36 percent of these apps are fitness apps and 24 percent deal with diseases and treatment.

Health apps help users monitor their health. From watching calorie intake to fitness training, the apps offer every kind of assistance required to maintain one’s health. 7 Minute Workout, a health app with three million users, helps one get a flat tummy, lose weight, and strengthen the core with 12 different exercises. Fooducate, another app, helps keep track of what one eats. This app not only counts the calories one consumes but also shows a detailed breakdown of the nutrition in a packaged food.

For Indian users, there’s HealthifyMe, which comes with a comprehensive database of more than 20,000 Indian foods. It also offers an on-demand fitness trainer, yoga instructor, and dietician. With this app, one can set goals to lose weight and track one’s food and activity. There are also companies like GOQii, which provide Indian customers with subscription-based health and fitness services on their smartphones, using fitness trackers that come free.

Dr Gupta of MedGenome explains that data accumulated by wearable devices can be sent directly to the healthcare provider for possible intervention, or even used to predict hospitalisation in the next few days.
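As a toy illustration of that kind of screening, here is a minimal sketch that flags readings far outside a rolling baseline. The threshold rule and heart-rate numbers are invented; a real system would rely on clinically validated models.

```python
from statistics import mean, stdev

def flag_anomalies(heart_rates, window=7, z_thresh=2.5):
    """Flag readings more than z_thresh standard deviations away from
    the rolling baseline of the previous `window` readings."""
    alerts = []
    for i in range(window, len(heart_rates)):
        baseline = heart_rates[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(heart_rates[i] - mu) / sigma > z_thresh:
            alerts.append((i, heart_rates[i]))
    return alerts

readings = [72, 70, 74, 71, 73, 69, 72, 71, 118, 70]
print(flag_anomalies(readings))  # [(8, 118)] -> candidate for provider review
```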

The Meeker study shows that global shipment of wearable gadgets grew from 26 million in 2014 to 102 million in 2016.

Another area that’s shown growth is electronic health records. In the US, adoption of electronic health records by office-based physicians soared from 21 percent in 2004 to 87 percent in 2015. In fact, every hospital with 500 beds (in the US) generates some 50 petabytes of health data.

Back home, the Ministry of Electronics and Information Technology, Government of India, runs the Aadhaar-based Online Registration System, a platform to help patients book appointments at major government hospitals. The portal has the potential to emerge as a source of big data, offering insights on diseases, age groups, shortcomings in hospitals, and areas to improve. The website claims to have already been used to make 8,77,054 appointments to date across 118 hospitals.

On account of the permeation of digital technology in healthcare, health data has recorded 48 percent growth year on year, the Meeker study says. The accumulated mass of data, according to it, has provided deeper insights into health conditions. The study shows a drastic increase in citations, from 5 million in 1977 to 27 million in 2017. Easy access to big data has ensured that scientists can now direct their investigations along patterns analysed from such information, and less time is required to arrive at conclusions.

“If a researcher has huge sets of data at his disposal, he/she can also find patterns and simulate them through machine learning tools, which decreases the time required to arrive at a conclusion. Machine learning methods become more robust when they are fed with results analysed from big data,” says Mukerji.

She adds, “These data simulation models rely on primary information generated from a study to build predictive models that can help assess how the human body would respond to a given perturbation.”
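A minimal sketch of the loop she describes, training a predictive model and checking it on held-out data, assuming scikit-learn and a synthetic stand-in for real patient data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "patients": 20 numeric features standing in for clinical
# variables, with a binary outcome to predict.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

More data typically tightens such estimates, which is the robustness gain Mukerji points to.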

The Meeker study also shows that Archimedes data simulation models can conduct clinical trials on data related to 50,000 patients collected over a period of 30 years in just two months. Without this model, it took seven years to conduct clinical trials on data from 2,838 patients collected over seven years.

As per the report, the results of 25,400 clinical trials were publicly available in 2016, against 1,900 in 2009.

The study also shows that the data simulation models used by laboratories have drastically decreased the time required for clinical trials, and that with the emergence of big data the number of publicly available clinical trials has risen.

Big data in scientific research

The developments around big data in healthcare have broken silos in scientific research. For example, the field of genomics has taken giant strides in evolving personalised and genetic medicine with the help of big data.

A good example of how big data analytics can help modern medicine is the Human Genome Project: it, and the innumerable genetics studies that paved the way for personalised medicine, would have been difficult without the democratisation of data, another boon of big data analytics. The study shows that in 2008 only 5 personalised medicines were available; by 2016 the number had risen to 132.

In India, a Bangalore-based integrated biotech company recently launched ‘Avestagenome’, a project to build a complete genetic, genealogical and medical database of the Parsi community. Avestha Gengraine Technologies (Avesthagen), which launched the project, believes the results from the Parsi genome project could enable disease prediction and accelerate the development of new therapies and diagnostics, both within the community and outside it.

MedGenome has been working in the same direction. “We collaborate with leading hospitals and research institutions to collect samples with research consent, generate sequencing data in our labs and analyse it along with clinical data to discover new mutations and disease-causing perturbations in genes or functional pathways. The resultant disease models and their predictions will become more accurate as and when more data becomes available.”

Mukerji says the democratisation of data, fuelled by the proliferation of technology, has also democratised scientific research across geographical boundaries. “Since data has been made easily accessible, any laboratory can now proceed with research,” says Mukerji.

“We only need to ensure that our efforts and resources are put in the right direction,” she adds.

Challenges with big data

But Dr Gupta warns that big data in itself does not guarantee reliability, for collecting quality data is a difficult task.

Moreover, he said, “In medicine and clinical genomics, domain knowledge often helps and is almost essential, not only to understand but also to find ways to effectively use the knowledge derived from the data and bring meaningful insights from it.”

Besides, big data gathering is heavily dependent on the adoption of digital health solutions, which further restricts the data to certain age groups. As per the Meeker report, 40 percent of millennial respondents covered in the study owned a wearable, compared with 26 percent of Generation X and 10 percent of baby boomers.

Similarly, 48 percent of millennials, 38 percent of Generation X, and 23 percent of baby boomers go online to find a physician. The report also shows that 10 percent of people using telemedicine and wearables proved themselves super adopters of the new healthcare technology in 2016, up from 2 percent in 2015.

Every technology brings its own challenges. With big data analytics, the secure storage and collection of data without violating the privacy of research subjects is an added challenge, and one even the Meeker study does not answer.

“The digital world is really scary,” says Mukerji.

“Though we try to secure our data with passwords on our devices, someone somewhere always has access to it,” she says.

Health apps downloaded on mobile phones often become a source of big data, not only for the company that produced them but also for other agencies hunting for data on the internet. “We often click various options while browsing the internet, and thus knowingly or unknowingly give a third party access to some data stored in the device or in the health app,” she adds.

Dimiter V Dimitrov, a health expert, makes similar assertions in his report, ‘Medical Internet of Things and Big Data in Healthcare’. He reports that wearables often interact with a server in their own language, providing it with the required information.

“Although many devices now have sensors to collect data, they often talk with the server in their own language,” he said in his report.

Even though the industry is still at a nascent stage and privacy remains a concern, Mukerji says that agencies possessing health data can certainly share it with laboratories without disclosing patient identity.

Original article here.

 



The Internet is Important to Everyone (Infographic)

2017-07-10

The Internet is not a luxury. The Internet is important to everybody: individuals, companies, governments, and institutions of all kinds. This infographic shows the relationship of each group to the Internet and how some groups are being left behind.

Full Infographic:

Original infographic here.

