The more inclusive you are to the needs of your users, the more accessible your design is. Let’s take a closer look at the different lenses of accessibility through which you can refine your designs.
By Steven Lambert
“Accessibility is solved at the design stage.” This is a phrase that Daniel Na and his team heard over and over again while attending a conference. To design for accessibility means to be inclusive to the needs of your users. This includes your target users, users outside of your target demographic, users with disabilities, and even users from different cultures and countries. Understanding those needs is the key to crafting better and more accessible experiences for them.
One of the most common problems when designing for accessibility is knowing what needs you should design for. It’s not that we intentionally design to exclude users, it’s just that “we don’t know what we don’t know.” So, when it comes to accessibility, there’s a lot to know.
How do we go about understanding the myriad of users and their needs? How can we ensure that their needs are met in our design? To answer these questions, I have found that it is helpful to apply a critical analysis technique of viewing a design through different lenses.
“Good [accessible] design happens when you view your [design] from many different perspectives, or lenses.”
A lens is “a narrowed filter through which a topic can be considered or examined.” Often used to examine works of art, literature, or film, lenses ask us to leave behind our worldview and instead view the world through a different context.
For example, viewing art through a lens of history asks us to understand the “social, political, economic, cultural, and/or intellectual climate of the time.” This allows us to better understand what world influences affected the artist and how that shaped the artwork and its message.
Accessibility lenses are filters that we can use to understand how different aspects of a design affect the needs of its users. Each lens presents a set of questions to ask yourself throughout the design process. By using these lenses, you will become more inclusive to the needs of your users, allowing you to design a more accessible user experience for all.
The Lenses of Accessibility are:
Lens of Animation and Effects
Lens of Audio and Video
Lens of Color
Lens of Controls
Lens of Font
Lens of Images and Icons
Lens of Keyboard
Lens of Layout
Lens of Material Honesty
Lens of Readability
Lens of Structure
Lens of Time
You should know that not every lens will apply to every design. While some can apply to every design, others are more situational. What works best in one design may not work for another.
The questions provided by each lens are merely a tool to help you understand what problems may arise. As always, you should test your design with users to ensure it’s usable and accessible to them.
Lens Of Animation And Effects
Effective animations can help bring a page and brand to life, guide the user's focus, and help orient a user. But animations are a double-edged sword. Not only can misused animations cause confusion or distraction, but they can also be physically harmful to some users.
Fast flashing effects (defined as flashing more than three times a second) or high-intensity effects and patterns can trigger seizures in users with photosensitive epilepsy. Photosensitivity can also cause headaches, nausea, and dizziness. Users with photosensitive epilepsy have to be very careful when using the web, as they never know when something might trigger a seizure.
Other effects, such as parallax or motion effects, can cause some users to feel dizzy or experience vertigo due to vestibular sensitivity. The vestibular system controls a person’s balance and sense of motion. When this system doesn’t function as it should, it causes dizziness and nausea.
“Imagine a world where your internal gyroscope is not working properly. Very similar to being intoxicated, things seem to move of their own accord, your feet never quite seem to be stable underneath you, and your senses are moving faster or slower than your body.”
Constant animations or motion can also be distracting to users, especially to users who have difficulty concentrating. GIFs are notably problematic as our eyes are drawn towards movement, making it easy to be distracted by anything that updates or moves constantly.
This isn’t to say that animation is bad and you shouldn’t use it. Instead, you should understand why you’re using an animation and how to design it safely. Generally speaking, aim for animations that cover small distances, match the direction and speed of other moving objects (including scroll), and are small relative to the screen size.
To use the Lens of Animation and Effects, ask yourself these questions:
Are there any effects that could cause a seizure?
Are there any animations or effects that could cause dizziness or vertigo through use of motion?
Are there any animations that could be distracting by constantly moving, blinking, or auto-updating?
Is it possible to provide controls or options to stop, pause, hide, or change the frequency of any animations or effects?
Lens Of Audio And Video
Autoplaying videos and audio can be pretty annoying. Not only do they break a user's concentration, but they also force the user to hunt down the offending media and mute or stop it. As a general rule, don’t autoplay media.
“Use autoplay sparingly. Autoplay can be a powerful engagement tool, but it can also annoy users if undesired sound is played or they perceive unnecessary resource usage (e.g. data, battery) as the result of unwanted video playback.”
You’re now probably asking, “But what if I autoplay the video in the background but keep it muted?” While using videos as backgrounds may be a growing trend in today’s web design, background videos suffer from the same problems as GIFs and constantly moving animations: they can be distracting. As such, you should provide controls or options to pause or disable the video.
Along with controls, videos should have transcripts and/or subtitles so users can consume the content in whatever way works best for them. Users who are visually impaired, or who would rather read than watch the video, need a transcript, while users who can't hear the audio or don't want to listen to it need subtitles.
To use the Lens of Audio and Video, ask yourself these questions:
Is there any audio or video that autoplays and could annoy users?
Is it possible to provide controls to stop, pause, or hide any audio or videos that autoplay?
Do videos have transcripts and/or subtitles?
Lens Of Color
Color plays an important part in a design. Colors evoke emotions, feelings, and ideas. Colors can also help strengthen a brand’s message and perception. Yet the power of colors is lost when a user can’t see them or perceives them differently.
Because colors and their meanings can be lost either through cultural differences or color blindness, you should always add a non-color identifier. Identifiers such as icons or text descriptions can help bridge cultural differences while patterns work well to distinguish between colors.
After years of being left for dead, SQL today is making a comeback. How come? And what effect will this have on the data community?
Since the dawn of computing, we have been collecting exponentially growing amounts of data, constantly asking more from our data storage, processing, and analysis technology. In the past decade, this caused software developers to cast aside SQL as a relic that couldn’t scale with these growing data volumes, leading to the rise of NoSQL: MapReduce and Bigtable, Cassandra, MongoDB, and more.
In this post we examine why the pendulum today is swinging back to SQL, and what this means for the future of the data engineering and analysis community.
Part 1: A New Hope
To understand why SQL is making a comeback, let’s start with why it was designed in the first place.
Our story starts at IBM Research in the early 1970s, where the relational database was born. At that time, query languages relied on complex mathematical logic and notation. Two newly minted PhDs, Donald Chamberlin and Raymond Boyce, were impressed by the relational data model but saw that the query language would be a major bottleneck to adoption. They set out to design a new query language that would be (in their own words): “more accessible to users without formal training in mathematics or computer programming.”
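The language they designed became SQL, and its declarative style still reflects that original goal: you state what you want, not how to fetch it. As a rough illustration, here is a toy query run through Python's built-in sqlite3 module; the table and data are invented for the example, and an in-memory database stands in for a real relational system.

```python
import sqlite3

# A toy example of the declarative style Chamberlin and Boyce aimed for.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ada", "Research", 120), ("Grace", "Research", 130), ("Alan", "Support", 90)],
)

# "Which research employees earn more than 100?" reads almost like English:
# we describe the result we want, and the engine decides how to retrieve it.
rows = conn.execute(
    "SELECT name FROM employees"
    " WHERE department = 'Research' AND salary > 100"
    " ORDER BY name"
).fetchall()
print(rows)  # [('Ada',), ('Grace',)]
```

No loops, no pointer navigation, no mathematical notation: exactly the accessibility to non-programmers that the designers were after.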
70 free data sources for 2017 on government, crime, health, financial and economic data, marketing and social media, journalism and media, real estate, company directory and review, and more to start working on your data projects.
Every great data visualization starts with good and clean data. Many people believe that collecting data is a rough task, but that’s simply not true. There are thousands of free data sets available online, ready to be analyzed and visualized by anyone. Here we’ve rounded up 70 free data sources for 2017 on government, crime, health, financial and economic data, marketing and social media, journalism and media, real estate, company directories and reviews, and more.
We hope this list saves you a lot of time and energy otherwise spent searching blindly online.
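Once you have downloaded a data set from any of the sources below, getting a first look at it takes only a few lines. Here is a minimal sketch using pandas (assumed to be installed); the inline CSV is toy data standing in for a downloaded file, and its column names are invented for illustration.

```python
import io

import pandas as pd

# Toy stand-in for a CSV file downloaded from one of the portals below;
# the columns here are hypothetical.
csv_data = io.StringIO(
    "state,year,crime_rate\n"
    "Kansas,2015,390.0\n"
    "Kansas,2016,410.5\n"
    "Ohio,2016,300.2\n"
)
df = pd.read_csv(csv_data)

# A quick first look before any analysis or visualization.
print(df.shape)                                   # (3, 3)
print(df.groupby("state")["crime_rate"].mean())   # per-state average
```

The same `pd.read_csv` call works on a local file path or, for many of the portals, directly on a download URL.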
Free Data Source: Government
Data.gov: The home of the US Government’s open data, acting as a portal to all sorts of information on everything from climate to crime.
Data.gov.uk: Datasets from all UK central government departments and a number of other public sector bodies and local authorities. It acts as a portal to information on everything from business and economy, crime and justice, defence and education to environment, government, health, society and transportation.
U.S. Census Bureau: Government statistics on the lives of US citizens, including population, economy, education, geography, and more.
The CIA World Factbook: Facts on every country in the world; focuses on history, government, population, economy, energy, geography, communications, transportation, military, and transnational issues of 267 countries.
Socrata: Socrata is a mission-driven software company and another interesting place to explore government-related data, with some visualization tools built in. Its data-as-a-service offering has been adopted by more than 1,200 government agencies for open data, performance management and data-driven government.
European Union Open Data Portal: The single point of access to a growing range of data from the institutions and other bodies of the European Union. The data covers economic development within the EU and transparency within the EU institutions, including geographic, geopolitical and financial data, statistics, election results, legal acts, and data on crime, health, the environment, transport and scientific research. It can be reused in different databases and reports, and a variety of digital formats are available from the EU institutions and other EU bodies. The portal also provides a standardised catalogue, a list of apps and web tools reusing these data, a SPARQL endpoint query editor, REST API access, and tips on how to make the best use of the site.
Canada Open Data: A pilot project with many government and geospatial datasets. It shows how the Government of Canada creates greater transparency and accountability, increases citizen engagement, and drives innovation and economic opportunity through open data, open information, and open dialogue.
Datacatalogs.org: Open government data from the US, EU, Canada, CKAN, and more.
UK Data Service: The UK Data Service collection includes major UK government-sponsored surveys, cross-national surveys, longitudinal studies, UK census data, international aggregate, business data, and qualitative data.
Free Data Source: Crime
Uniform Crime Reporting: The UCR Program has been the starting place for law enforcement executives, students, researchers, members of the media, and the public seeking information on crime in the US.
FBI Crime Statistics: Statistical crime reports and publications detailing specific offenses and outlining trends to understand crime threats at both local and national levels.
Bureau of Justice Statistics: Information on anything related to the U.S. justice system, including arrest-related deaths, the census of jail inmates, the national survey of DNA crime labs, surveys of law enforcement gang units, and more.
National Sex Offender Search: It is an unprecedented public safety resource that provides the public with access to sex offender data nationwide. It presents the most up-to-date information as provided by each Jurisdiction.
Free Data Source: Health
U.S. Food & Drug Administration: A compressed data file of the Drugs@FDA database. While Drugs@FDA is updated daily, this data file is updated once per week, on Tuesdays.
UNICEF: UNICEF gathers evidence on the situation of children and women around the world. The data sets include accurate, nationally representative data from household surveys and other sources.
Healthdata.gov: 125 years of US healthcare data including claim-level Medicare data, epidemiology and population statistics.
NHS Health and Social Care Information Centre: Health data sets from the UK National Health Service. The organization produces more than 260 official and national statistical publications. This includes national comparative data for secondary uses, developed from the long-running Hospital Episode Statistics which can help local decision makers to improve the quality and efficiency of frontline care.
Free Data Source: Financial and Economic Data
World Bank Open Data: Education statistics about everything from finances to service delivery indicators around the world.
IMF Economic Data: An incredibly useful source of information that includes global financial stability reports, regional economic reports, international financial statistics, exchange rates, directions of trade, and more.
UN Comtrade Database: Free access to detailed global trade data with visualizations. UN Comtrade is a repository of official international trade statistics and relevant analytical tables. All data is accessible through API.
Global Financial Data: With data on over 60,000 companies covering 300 years, Global Financial Data offers a unique source to analyze the twists and turns of the global economy.
Google Finance: Real-time stock quotes and charts, financial news, currency conversions, or tracked portfolios.
Google Public Data Explorer: Google’s Public Data Explorer provides public data and forecasts from a range of international organizations and academic institutions including the World Bank, OECD, Eurostat and the University of Denver. These can be displayed as line graphs, bar graphs, cross sectional plots or on maps.
U.S. Bureau of Economic Analysis: U.S. official macroeconomic and industry statistics, most notably reports about the gross domestic product (GDP) of the United States and its various units. They also provide information about personal income, corporate profits, and government spending in their National Income and Product Accounts (NIPAs).
Financial Data Finder at OSU: Plentiful links to anything related to finance, no matter how obscure, including World Development Indicators Online, World Bank Open Data, Global Financial Data, International Monetary Fund Statistical Databases, and EMIS Intelligence.
Financial Times: The Financial Times provides a broad range of information, news and services for the global business community.
Free Data Source: Marketing and Social Media
Amazon API: Browse Amazon Web Services’ Public Data Sets by category for a huge wealth of information. Amazon API Gateway allows developers to securely connect mobile and web applications to APIs that run on AWS Lambda, Amazon EC2, or other publicly addressable web services hosted outside of AWS.
American Society of Travel Agents: ASTA is the world’s largest association of travel professionals. It provides information on its members, including travel agents and the companies whose products they sell, such as tours, cruises, hotels, car rentals, etc.
Social Mention: Social Mention is a social media search and analysis platform that aggregates user-generated content from across the universe into a single stream of information.
Google Trends: Google Trends shows how often a particular search-term is entered relative to the total search-volume across various regions of the world in various languages.
Facebook API: Learn how to publish to and retrieve data from Facebook using the Graph API.
Twitter API: The Twitter Platform connects your website or application with the worldwide conversation happening on Twitter.
Instagram API: The Instagram API Platform can be used to build non-automated, authentic, high-quality apps and services.
Foursquare API: The Foursquare API gives you access to our world-class places database and the ability to interact with Foursquare users and merchants.
HubSpot: A large repository of marketing data. You can find the latest marketing stats and trends here. It also provides tools for social media marketing, content management, web analytics, landing pages and search engine optimization.
Moz: Insights on SEO that includes keyword research, link building, site audits, and page optimization insights in order to help companies to have a better view of the position they have on search engines and how to improve their ranking.
The New York Times Developer Network: Search Times articles from 1851 to today, retrieving headlines, abstracts and links to associated multimedia. You can also search book reviews, NYC event listings, movie reviews, top stories with images and more.
Associated Press API: The AP Content API allows you to search and download content using your own editorial tools, without having to visit AP portals. It provides access to images from AP-owned, member-owned and third-party sources, and to videos produced by AP and selected third parties.
Google Books Ngram Viewer: It is an online search engine that charts frequencies of any set of comma-delimited search strings using a yearly count of n-grams found in sources printed between 1500 and 2008 in Google’s text corpora.
Wikipedia Database: Wikipedia offers free copies of all available content to interested users.
FiveThirtyEight: A website that focuses on opinion poll analysis, politics, economics, and sports blogging. The data and code behind its stories and interactives are available on GitHub.
Google Scholar: Google Scholar is a freely accessible web search engine that indexes the full text or metadata of scholarly literature across an array of publishing formats and disciplines. It includes most peer-reviewed online academic journals and books, conference papers, theses and dissertations, preprints, abstracts, technical reports, and other scholarly literature, including court opinions and patents.
Free Data Source: Real Estate
Castles: Castles is a successful, privately owned independent agency. Established in 1981, it offers a comprehensive service incorporating residential sales, lettings and management, and surveys and valuations.
Realestate.com: RealEstate.com serves as the ultimate resource for first-time home buyers, offering easy-to-understand tools and expert advice at every stage in the process.
Gumtree: Gumtree is the first site for free classified ads in the UK. You can buy and sell items, cars and properties, and find or offer jobs in your area, all on the website.
James Hayward: It provides an innovative database approach to residential sales, lettings & management.
Free Data Source: Company Directory and Review
LinkedIn: LinkedIn is a business- and employment-oriented social networking service that operates via websites and mobile apps. It has 500 million members in 200 countries, and you can find its business directory here.
OpenCorporates: OpenCorporates is the largest open database of companies and company data in the world, with in excess of 100 million companies across a large number of jurisdictions. Its primary goal is to make information on companies more usable and more widely available for the public benefit, particularly to tackle the use of companies for criminal or anti-social purposes such as corruption, money laundering and organised crime.
Yellowpages: The original source to find and connect with local plumbers, handymen, mechanics, attorneys, dentists, and more.
Craigslist: Craigslist is an American classified advertisements website with sections devoted to jobs, housing, personals, for sale, items wanted, services, community, gigs, résumés, and discussion forums.
GAF Master Elite Contractor: Founded in 1886, GAF has become North America’s largest manufacturer of commercial and residential roofing (source: Freedonia Group study), with nearly $3 billion in sales. It is an operating subsidiary of Standard Industries.
CertainTeed: Find contractors, remodelers, installers or builders in the US or Canada for your residential or commercial project.
Manta: Manta is one of the largest online resources delivering products, services and educational opportunities. The Manta directory boasts millions of unique visitors every month who search its comprehensive database for individual businesses, industry segments and geography-specific listings.
Kansas Bar Association: Directory for lawyers. The Kansas Bar Association (KBA) was founded in 1882 as a voluntary association for dedicated legal professionals and has more than 7,000 members, including lawyers, judges, law students, and paralegals.
Free Data Source: Other Portal Websites
Capterra: Directory about business software and reviews.
Monster: Data source for jobs and career opportunities.
Glassdoor: Directory about jobs and information about inside scoop on companies with employee reviews, personalized salary tools, and more.
Some tasks are common to almost all users, though, regardless of subject area: data import, data wrangling and data visualization. The table below shows my favorite go-to packages for each of these three tasks (plus a few miscellaneous ones tossed in). The package names in the table are clickable if you want more information. To find out more about a package once you’ve installed it, type help(package = "packagename") in your R console (substituting the actual package name, of course).
As Python has gained a lot of traction in the data science industry in recent years, I wanted to outline some of its most useful libraries for data scientists and engineers, based on recent experience.
And since all of the libraries are open source, we have added commit counts, contributor counts and other metrics from GitHub, which can serve as proxy metrics for library popularity.
1. NumPy (Commits: 15980, Contributors: 522)
When starting on scientific tasks in Python, one inevitably turns to Python’s SciPy Stack, a collection of software specifically designed for scientific computing in Python (not to be confused with the SciPy library, which is part of this stack, or with the community around the stack). So we want to start with a look at it. The stack is pretty vast, with more than a dozen libraries in it, so we will put the focus on the core packages, particularly the most essential ones.
The most fundamental package, around which the scientific computation stack is built, is NumPy (which stands for Numerical Python). It provides an abundance of useful features for operations on n-dimensional arrays and matrices in Python. The library vectorizes mathematical operations on the NumPy array type, which improves performance and speeds up execution.
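A small sketch of what that vectorization looks like in practice:

```python
import numpy as np

# NumPy arrays support whole-array ("vectorized") arithmetic: the loop over
# elements runs in compiled code rather than in the Python interpreter.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = a * 2.0 + 1.0          # element-wise, no explicit Python loop
print(b)                   # [3. 5. 7. 9.]

# The pure-Python equivalent, [x * 2.0 + 1.0 for x in a], does the same
# work element by element in the interpreter and is far slower at scale.

# Matrix operations are just as direct:
m = np.array([[1, 2], [3, 4]])
print(m @ m)               # matrix product: [[7, 10], [15, 22]]
```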
2. SciPy (Commits: 17213, Contributors: 489)
SciPy is a library of software for engineering and science. Again, keep in mind the difference between the SciPy Stack and the SciPy library. SciPy contains modules for linear algebra, optimization, integration, and statistics. Its main functionality is built upon NumPy, so its arrays make substantial use of NumPy. It provides efficient numerical routines, such as numerical integration and optimization, via its specific submodules. The functions in all submodules of SciPy are well documented, another point in its favor.
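For instance, two of those routines, numerical integration from scipy.integrate and minimization from scipy.optimize, can be used as below (a minimal sketch with toy functions, assuming SciPy is installed):

```python
import numpy as np
from scipy import integrate, optimize

# Numerical integration: integrate sin(x) from 0 to pi (exact answer: 2).
area, abs_err = integrate.quad(np.sin, 0.0, np.pi)
print(round(area, 6))      # 2.0

# Optimization: find the minimum of f(x) = (x - 2)^2 (exact minimizer: x = 2).
result = optimize.minimize_scalar(lambda x: (x - 2.0) ** 2)
print(round(result.x, 6))  # 2.0
```

Both routines return more than shown here (error estimates, convergence flags), which the SciPy documentation covers in detail.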
Python caught up with R and (barely) overtook it; Deep Learning usage surges to 32%; RapidMiner remains top general Data Science platform; Five languages of Data Science.
The 18th annual KDnuggets Software Poll again drew huge participation from the analytics and data science community and vendors, attracting about 2,900 voters, almost exactly the same as last year. Here is the initial analysis, with more detailed results to be posted later.
Python, whose share has been growing faster than R for the last several years, has finally caught up with R, and (barely) overtook it, with 52.6% share vs 52.1% for R.
The biggest surprise is probably the phenomenal share of Deep Learning tools, now used by 32% of all respondents, up from only 18% in 2016 and 9% in 2015. Google TensorFlow rapidly became the leading Deep Learning platform, with 20.2% share, up from only 6.8% in the 2016 poll, and entered the top 10 tools.
RapidMiner remains the most popular general platform for data mining/data science, with about 33% share, almost exactly the same as in 2016.
We note that many vendors have encouraged their users to vote, but all vendors had equal chances, so this does not violate KDnuggets guidelines. We have not seen any bot voting or direct links to vote for only one tool this year.
Spark grew to about 23% and kept its place in top 10 ahead of Hadoop.
Besides TensorFlow, another new tool in the top tier is Anaconda, with 22% share.
Top Analytics/Data Science Tools
Fig 1: KDnuggets Analytics/Data Science 2017 Software Poll: top tools in 2017, and their share in the 2015-6 polls
Members of the IT Central Station community say that the most important factors to consider when choosing a data visualization product include dashboard customization, data analysis capabilities, and ease of use. Five of the top data visualization solutions on the market are Tableau, Sisense, Dundas BI, Qlik Sense, and SAP Lumira, according to online reviews by enterprise users in the IT Central Station community.
But what do enterprise users really think about some of these tools? Here, users give a shout-out for some of their favorite features, but also give the vendors a little tough love.
Editor’s note: These reviews of select data visualization tools come from the IT Central Station community. They are the opinions of the users and are based on their own experiences.
“The most valuable feature in the Tableau Desktop developer version is the drag-and-drop feature for dimensions and measures. Parameters and action filters are also great.” — Akarsh A., manager of business intelligence at a tech services company
“The most important and valuable feature is the ability to merge any kind of data with your data set, even cloud-based data. It gives the business user the power to analyze something new with his own data sources.” — Oscar B., business intelligence specialist at a financial services firm
Room for improvement
“Tableau lacks machine learning algorithms that you can implement using R, SPSS Modeler, and Python. It has clustering and time-series forecasting abilities, which are helpful, but adding machine learning capabilities (like decision trees, CHAID analysis and K-means) would make this product perfect!” — Yali P., data analysis team leader at an internet service company
“I have difficulty working with many filters on the dashboards, and I’d like to see more options in the Histories section. QlikView makes better use of the dashboard filters.” — Luiz Henrique F., planning specialist at a communications service provider
“A facility to add custom code to the dashboard would be helpful, and there is no formatting option for individual filters.” — Sampath P., vice president of strategy, global delivery and operations at a tech services company
“We found the ease-of-use to be our primary factor for choosing Sisense. We have a client that changes their mind often and their business is moving very quickly. We needed to bring them a tool that could be learned and then deployed easily, without a lot of technical expertise.” — J. Matt, section editor of print software at a printing company
“Time to deployment was one of our most critical factors in choosing a BI vendor, and I am not sure you can get to deployment faster than Sisense. On day one, I was able to download it, connect two disparate data sources with multiple tables, and build meaningful dashboards.” — Richard E., product owner of business intelligence at a software R&D company
“It’s really user-friendly and fast. I can use the product during customer meetings, not only to show dashboards, but to create and process data analytics in real time. It’s perfect for consultant workshops where you can’t work with static dashboards. Also, because it has an HTML5 interface, it now looks very good when compared to BI solutions that haven’t evolved since Windows XP.” — Romain N., project manager at a manufacturing company
Room for improvement
“The application lacks control of the exporting function. It gives you the ability to export both the dashboards and the widgets in two formats each, but the format of the exports is not completely under the control of the administrators. Mostly you select a few options and hope it comes out looking professional. A reporting engine that allows the administrators to format a template used in the exporting options would go a long way.” — Eric Z., vice president of IT at a manufacturing company
“Better tracking of the targets for our representatives, quick overview of the market by product categories, supplier, customers, states, etc. We’d also like ultra-fast drill-down capabilities that allow us to find answers to ad-hoc business questions in a few seconds during business meetings.” — Olivier C., business analyst at a marketing services firm
“Although the plug-in support mitigates this, Sisense could use additional out-of-the-box visualizations. The maps feature could also be improved.” — Jared K., vice president of technology at a tech vendor
“Version control and the ability to roll backward and forward on a dashboard. Ability to drag and drop an Excel file onto the page and have it create a data source from which to visualize.” — Tom L., senior programmer/analyst at a tech services company
“The simplicity involved in generating dashboards has dramatically increased the number of dashboards we can get into the hands of users each month. Using easy drag-and-drop functionality, fast data discovery and powerful dashboard tools is allowing us to quickly give the staff, managers and decision makers the information they need to make informed decisions.” — Tom M., energy efficiency analyst at an energy company
“Dundas BI is an end-to-end solution. It has ETL, reporting and dashboarding capabilities on one platform. It can manage all ETL procedures on its own. It not only reduces the budget of the project, but it also is easy to collect data from different sources.” — Zafer M., managing partner at a public broadcaster
Room for improvement
“The addition of funnel charts to the visualization options would be great. We have not been able to default a data grid to be collapsed by groups. This would be a big help for some dashboards requiring lots of details on the screen.” — Andy C., manager of business intelligence and data architecture at a tech services company
“On some of the user interfaces, such as the ‘join’ interface, it is not possible to cancel out of the screen without making any changes.” — Tom M., energy efficiency analyst at an energy company
“Its ease of use. No code is necessary to build a nice panel that works for desktops and mobile devices. You can also add customized graphics made by the Github community (branch.qlik.com)!” — Kleyn G., IT analyst at a government agency
“The possibility of generating in-memory insights using self-service data connections with any database. Also, creating beautiful dashboards and analysis to use in business presentations, gaining value with trusted and solid information generated by the ETL in-memory tool.” — Arthur K., BI specialist at an educational organization
Room for improvement
“As of today, QlikView and Qlik Sense are only capable of storing the data results to a proprietary file. No other tool, outside of Qlik, is able to read these files.” — Nick R., senior programmer analyst at an energy/utilities company
“I miss some of the functions found in their previous product, QlikView, such as dynamic (written by a function) expression or sheet name labels, to create multi-language applications.” — BIExpert870, BI expert at a tech services company
“We like the idea of being able to tell a story about our data and do some ad hoc data mining. We could show the sales across territories and actually see which ones were performing better than other ones. It was kind of an eye-opener.” — Steve B., business intelligence analyst at a recreational services company
“The dynamic creation of dashboards is the key feature. It provides quick visualizations for reports. As it’s hard to explain our reports, sometimes management misses the point, so the visualization is a lot better than a table that we typically show. It’s the visualization that helps them understand what we report on.” — SrDirector260, senior director at a tech services company
“What impresses me most are the charts and graphs, maps, and integration with ESRI. What’s more, you can integrate with other third-party APIs and design your own charts.” — SeniorVP472, senior vice president and head of multicultural insights at a consulting firm
Room for improvement
“I would look at the geospatial part: it’s a little cumbersome, and it’s not as accurate.” — Steve B., business intelligence analyst at a recreational services company
“The Edge version of SAP Lumira still needs to be more user-friendly. It’s still in the initial stages, so we can expect more features in future releases.” — BizOpsAnalyst442, business operations analyst at a tech company
IBM announced that Watson Analytics, a breakthrough natural language-based cognitive service that can provide instant access to powerful predictive and visual analytic tools for businesses, is available in beta. See Vine (vine.co/v/Ov6uvi1m7lT) for a sneak peek now.
I’m pleased to announce that I have my access, and it’s amazing. Uploading raw CSV data and playing with it is a great shortcut to finding insights. It works really well and really quickly.
IBM Watson Analytics automates the once time-consuming tasks such as data preparation, predictive analysis, and visual storytelling for business professionals. Offered as a cloud-based freemium service, all business users can now access Watson Analytics from any desktop or mobile device. Since being announced on September 16, more than 22,000 people have already registered for the beta. The Watson Analytics Community, a user group for sharing news, best practices, technical support and training, is also accessible starting today.
This news follows IBM’s recently announced global partnership with Twitter, which includes plans to offer Twitter data as part of IBM Watson Analytics.
Learn more about how IBM Watson Analytics works:
As part of its effort to equip all professionals with the tools needed to do their jobs better, Watson Analytics provides business professionals with a unified experience and natural language dialogue so they can better understand data and more quickly reach business goals. For example, a marketing, HR or sales rep can quickly source data, cleanse and refine it, discover insights, predict outcomes, visualize results, create reports and dashboards and explain results in familiar business terms.
Artificial intelligence, machine learning, and smart things promise an intelligent future.
Today, a digital stethoscope has the ability to record and store heartbeat and respiratory sounds. Tomorrow, the stethoscope could function as an “intelligent thing” by collecting a massive amount of such data, relating the data to diagnostic and treatment information, and building an artificial intelligence (AI)-powered doctor assistance app to provide the physician with diagnostic support in real-time. AI and machine learning increasingly will be embedded into everyday things such as appliances, speakers and hospital equipment. This phenomenon is closely aligned with the emergence of conversational systems, the expansion of the IoT into a digital mesh and the trend toward digital twins.
Three themes — intelligent, digital, and mesh — form the basis for the Top 10 strategic technology trends for 2017, announced by David Cearley, vice president and Gartner Fellow, at Gartner Symposium/ITxpo 2016 in Orlando, Florida. These technologies are just beginning to break out of an emerging state and stand to have substantial disruptive potential across industries.
AI and machine learning have reached a critical tipping point and will increasingly augment and extend virtually every technology-enabled service, thing or application. Creating intelligent systems that learn, adapt and potentially act autonomously, rather than simply execute predefined instructions, is the primary battleground for technology vendors through at least 2020.
Trend No. 1: AI & Advanced Machine Learning
AI and machine learning (ML), which include technologies such as deep learning, neural networks and natural-language processing, can also encompass more advanced systems that understand, learn, predict, adapt and potentially operate autonomously. Systems can learn and change future behavior, leading to the creation of more intelligent devices and programs. The combination of extensive parallel processing power, advanced algorithms and massive data sets to feed the algorithms has unleashed this new era.
In banking, you could use AI and machine-learning techniques to model current real-time transactions, as well as predictive models of transactions based on their likelihood of being fraudulent. Organizations seeking to drive digital innovation with this trend should evaluate a number of business scenarios in which AI and machine learning could drive clear and specific business value, and consider experimenting with one or two high-impact scenarios.
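To make the banking example concrete: in its simplest possible form, a fraud check compares a new transaction against a customer's own history. The sketch below is an illustration only, not Gartner's methodology or any bank's system; `fraud_score`, the 3-sigma threshold, and the single amount feature are my own hypothetical choices, whereas real systems use trained classifiers over many features.

```python
# A toy illustration of scoring transactions by how unusual they are.
# Real fraud models use far richer features and trained classifiers;
# here we simply flag amounts that deviate sharply from history.
from statistics import mean, stdev

def fraud_score(history, amount):
    """Return a z-score: how many standard deviations `amount`
    sits from this customer's historical transaction amounts."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma if sigma else 0.0

def is_suspicious(history, amount, threshold=3.0):
    # A transaction more than `threshold` deviations out is flagged.
    return fraud_score(history, amount) > threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0]   # typical card spend
print(is_suspicious(history, 50.0))    # ordinary purchase -> False
print(is_suspicious(history, 900.0))   # wildly out of pattern -> True
```

Even this crude model captures the core idea: the system learns what "normal" looks like from data rather than from a hand-written rule per customer.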
Trend No. 2: Intelligent Apps
Intelligent apps, which include technologies like virtual personal assistants (VPAs), have the potential to transform the workplace by making everyday tasks easier (prioritizing emails) and their users more effective (highlighting important content and interactions). However, intelligent apps are not limited to new digital assistants – every existing software category, from security tooling to enterprise applications such as marketing or ERP, will be infused with AI-enabled capabilities. Using AI, technology providers will focus on three areas — advanced analytics, AI-powered and increasingly autonomous business processes, and AI-powered immersive, conversational and continuous interfaces. By 2018, Gartner expects most of the world’s largest 200 companies to exploit intelligent apps and utilize the full toolkit of big data and analytics tools to refine their offers and improve customer experience.
Trend No. 3: Intelligent Things
New intelligent things generally fall into three categories: robots, drones and autonomous vehicles. Each of these areas will evolve to impact a larger segment of the market and support a new phase of digital business, but they represent only one facet of intelligent things. Existing things, including IoT devices, will become intelligent things delivering the power of AI-enabled systems everywhere, including the home, office, factory floor and medical facility.
As intelligent things evolve and become more popular, they will shift from a stand-alone to a collaborative model in which intelligent things communicate with one another and act in concert to accomplish tasks. However, nontechnical issues such as liability and privacy, along with the complexity of creating highly specialized assistants, will slow embedded intelligence in some scenarios.
The lines between the digital and physical world continue to blur, creating new opportunities for digital businesses. Look for the digital world to be an increasingly detailed reflection of the physical world, and for the digital world to appear as part of the physical world, creating fertile ground for new business models and digitally enabled ecosystems.
Trend No. 4: Virtual & Augmented Reality
Virtual reality (VR) and augmented reality (AR) transform the way individuals interact with each other and with software systems, creating an immersive environment. For example, VR can be used for training scenarios and remote experiences. AR, which enables a blending of the real and virtual worlds, means businesses can overlay graphics onto real-world objects, such as hidden wires on the image of a wall. Immersive experiences with AR and VR are reaching tipping points in terms of price and capability but will not replace other interface models. Over time, AR and VR will expand beyond visual immersion to include all human senses. Enterprises should look for targeted applications of VR and AR through 2020.
Trend No. 5: Digital Twin
Within three to five years, billions of things will be represented by digital twins, a dynamic software model of a physical thing or system. Using physics data on how the components of a thing operate and respond to the environment, as well as data provided by sensors in the physical world, a digital twin can be used to analyze and simulate real-world conditions, respond to changes, improve operations and add value. Digital twins function as proxies for the combination of skilled individuals (e.g., technicians) and traditional monitoring devices and controls (e.g., pressure gauges). Their proliferation will require a cultural change, as those who understand the maintenance of real-world things collaborate with data scientists and IT professionals. Digital twins of physical assets, combined with digital representations of facilities and environments as well as people, businesses and processes, will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.
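The digital-twin idea can be sketched in a few lines of code: a software object that mirrors a physical asset, ingests its sensor readings, and compares them against what a physics model says should be happening. Everything below is an illustrative invention, not any vendor's API; the pump, its linear flow-vs-speed "physics model," and the `drift` metric are hypothetical stand-ins for far richer real models.

```python
# A minimal sketch of a digital twin for a hypothetical pump.
# The twin holds a simple expected-behavior model and accumulates
# real sensor readings, so the gap between the two can be watched.
class PumpTwin:
    def __init__(self, rated_flow, flow_per_rpm):
        self.rated_flow = rated_flow      # litres/min at full speed
        self.flow_per_rpm = flow_per_rpm  # expected flow per unit of speed
        self.readings = []

    def expected_flow(self, rpm):
        # Stand-in "physics model": flow scales linearly with speed,
        # capped at the pump's rated capacity.
        return min(rpm * self.flow_per_rpm, self.rated_flow)

    def ingest(self, rpm, measured_flow):
        # Sensor data from the physical asset.
        self.readings.append((rpm, measured_flow))

    def drift(self):
        """Average gap between measured and expected flow —
        a growing gap might indicate wear or a blockage."""
        gaps = [abs(flow - self.expected_flow(rpm))
                for rpm, flow in self.readings]
        return sum(gaps) / len(gaps)

twin = PumpTwin(rated_flow=500, flow_per_rpm=0.25)
twin.ingest(1000, 249)   # healthy: model expects 250
twin.ingest(1200, 230)   # degraded: model expects 300
print(round(twin.drift(), 1))  # prints 35.5
```

The point of the sketch is the division of labor the article describes: the model encodes what technicians know about the asset, the sensors supply live data, and analysis happens on the software proxy rather than the physical thing.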
Trend No. 6: Blockchain
Blockchain is a type of distributed ledger in which value exchange transactions (in bitcoin or another token) are sequentially grouped into blocks. Blockchain and distributed-ledger concepts are gaining traction because they hold the promise of transforming operating models in industries such as music distribution, identity verification and title registry. They promise a model to add trust to untrusted environments and reduce business friction by providing transparent access to the information in the chain. While there is a great deal of interest, the majority of blockchain initiatives are in alpha or beta phases, and significant technology challenges exist.
The mesh refers to the dynamic connection of people, processes, things and services supporting intelligent digital ecosystems. As the mesh evolves, the user experience fundamentally changes and the supporting technology and security architectures and platforms must change as well.
Trend No. 7: Conversational Systems
Conversational systems can range from simple informal, bidirectional text or voice conversations such as an answer to “What time is it?” to more complex interactions such as collecting oral testimony from crime witnesses to generate a sketch of a suspect. Conversational systems shift from a model where people adapt to computers to one where the computer “hears” and adapts to a person’s desired outcome. Conversational systems do not use text/voice as the exclusive interface but enable people and machines to use multiple modalities (e.g., sight, sound, tactile, etc.) to communicate across the digital device mesh (e.g., sensors, appliances, IoT systems).
Trend No. 8: Mesh App and Service Architecture
The intelligent digital mesh will require changes to the architecture, technology and tools used to develop solutions. The mesh app and service architecture (MASA) is a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices, as well as APIs and events, to deliver modular, flexible and dynamic solutions. Solutions ultimately support multiple users in multiple roles using multiple devices and communicating over multiple networks. However, MASA is a long-term architectural shift that requires significant changes to development tooling and best practices.
Trend No. 9: Digital Technology Platforms
Digital technology platforms are the building blocks for a digital business and are necessary to break into digital. Every organization will have some mix of five digital technology platforms: information systems, customer experience, analytics and intelligence, the Internet of Things and business ecosystems. In particular, new platforms and services for IoT, AI and conversational systems will be a key focus through 2020. Companies should identify how industry platforms will evolve and plan ways to evolve their own platforms to meet the challenges of digital business.
Trend No. 10: Adaptive Security Architecture
The evolution of the intelligent digital mesh and digital technology platforms and application architectures means that security has to become fluid and adaptive. Security in the IoT environment is particularly challenging. Security teams need to work with application, solution and enterprise architects to consider security early in the design of applications or IoT solutions. Multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.
Fei-Fei Li is a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.
Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.
Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.
The New Cloud
Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.
Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.
When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others access to vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.
The New Microsoft
Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.
“With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.
The New Brains
In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.
Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now make use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.
The New Education
The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.
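The "coach, not player" contrast can be made concrete with a toy example. Instead of hard-coding a rule, a learning system recovers the same behavior purely from examples; here, fitting a line by least squares stands in (very loosely) for training a neural network. The function names and the rule y = 2x + 1 are my own illustrative inventions.

```python
# Two styles of building software. A hand-coded rule states the
# behavior directly; a "trained" model recovers it from examples.
def rule(x):
    # Classic programming: the engineer writes the rule down.
    return 2 * x + 1

def fit_line(points):
    # Learning: coax slope w and intercept b out of the data
    # using the closed-form least-squares solution.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return w, b

data = [(x, rule(x)) for x in range(10)]  # the "experience" to learn from
w, b = fit_line(data)
print(round(w, 6), round(b, 6))  # recovers 2.0 and 1.0 from data alone
```

The engineer's job shifts accordingly: curating the data and the training setup, rather than writing the behavior itself, which is why retraining ordinary coders takes deliberate effort.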
As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.
Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.
Hello, photographers. For the last two months, I’ve been doing market research for my project Photolemur and looking for different tools in the area of photo enhancement and photo editing. I spent a lot of time searching, and came up with a large organized list of 104 photo editing tools and apps that you should know about.
I believe all these services might be useful for some photographers, so I’ll share them here with you. And just to make it easier to find something specific, the list is numbered. Enjoy!
Table of contents
Photo enhancers (1-3)
Online editors (4-21)
Free desktop editors (22-26)
Paid desktop editors (27-40)
HDR photo editors (41-53)
Cross-platform image editors (54-57)
Photo filters (58-66)
Photo editing mobile apps (67-85)
RAW processors (86-96)
Photo viewers and managers (97-99)
Photo enhancers

1. Photolemur – The world’s first fully automated photo enhancement solution. It is powered by a special AI algorithm that fixes imperfections in images without human involvement (beta).
2. Softcolorsoftware – Automatic photo editor for batch photo enhancing, editing and color management.
3. Perfectly Clear – Photo editor with a set of automatic correction presets for Windows & Mac ($149)

Online editors

4. Pixlr – High-end photo editing and quick filtering – in your browser (free)
5. Fotor – Overall photo enhancement in an easy-to-use package (free)
6. Sumopaint – The most versatile photo editor and painting application that works in a browser (free)
7. Irfanview – An image viewer with added batch editing and conversion. Rename huge numbers of files in seconds, and resize them just as quickly. Freeware (for non-commercial use)
19. FotoFlexer – Photo editor and advanced photo effects for free
20. Picture2life – An Ajax-based photo editor focused on grabbing and editing images that are already online. The tool selection is average, and the user interface is poor.
21. Preloadr – A Flickr-specific tool that uses the Flickr API, even for account sign-in. The service includes basic cropping, sharpening, color correction and other tools to enhance images.
Free desktop editors
22. Photoscape – A simple, unusual editor that can handle more than just photos
23. Paint.net – Free image and photo editing software for PC
24. Krita – Digital painting and illustration application with CMYK support, HDR painting, G’MIC integration and more
25. ImageMagick – A software suite to create, edit, compose or convert images on the command line.
26. G’MIC – Full-featured framework for image processing with different user interfaces, including a GIMP plugin to convert, manipulate, filter, and visualize image data. Available for Windows and OS X
Paid desktop editors
27. Photoshop – Mother of all photo editors ($9.99/month)
28. Lightroom – A photo processor and image organizer developed by Adobe Systems for Windows and OS X ($9.99/month)
29. Capture One – A professional raw converter and image editing software designed for professional photographers who need to process large volumes of high-quality images in a fast and efficient workflow (279 EUR)
30. Radlab – Combines intuitive browsing, gorgeous effects and a lightning-fast processing engine for image editing ($149)
31. Affinity – Professional photo editing software for Mac ($49.99)
73. Avatan – Photo Editor, Effects, Stickers and Touch Up (free)
74. Retrica – Camera app to record and share your experience with over 100 filters (free).
75. Aviary – Photo editing app (bought by Adobe). Make photos beautiful in seconds with stunning filters, frames, stickers, touch-up tools and more. Provides an SDK for app developers (free)
76. Snapseed – A photo-editing application produced by Nik Software, a subsidiary of Google, for iOS and Android that enables users to enhance photos and apply digital filters (free).
77. Instagram – A fun and quirky way to share your life with friends through a series of pictures. Snap a photo with your mobile phone, then choose a filter to transform the image into a memory to keep around forever. One of the most popular mobile photo apps (free)
78. Lifecake – Save and organise pictures of your children growing up with Lifecake. In a timeline free from the adverts and noise that clutter most social media channels, you can easily look back over fond memories and share them with family and friends (free)
79. Qwik – Edit your images in seconds with straightforward hands-on tools, and share them with Qwik’s online community. With new filters and features being added every week, Qwik is constantly keeping itself fresh and exciting (free).
80. VSCO Cam – Comes packed with top performance features, including high-resolution imports and before-and-after comparisons to show how you built up your edit. Free (with paid filters at $57 each)
81. Camera MX – The Android-exclusive photo app Camera MX combines powerful enhancement tools with a beautifully simple user interface. Thanks to intelligent image processing you can take visibly sharper snaps, as well as cut and trim them to perfection in the edit. (free)
82. Lensical – Makes creating face effects as simple as adding photo filters. Lensical is designed for larger displays and utilises one-handed, gesture-based controls, making it the perfect complement to the iPhone 6 and iPhone 6S Plus’s cameras (free).
83. Camera+ – The Camera app that comes on the iPhone by default is not brilliant: yes, you can use it to take some decent shots, but it doesn’t offer much creative control. This is where Camera+ excels. The app has two parts, a camera and a photo editor, and it truly excels at the latter, with a huge range of advanced features ($2.99).
84. PhotoWonder – An excellent user interface makes Photo Wonder one of the speediest smartphone photo apps to use. It also has a good collage feature with multiple layouts and photo-booth effects. The filter selection isn’t huge, but many are so well designed that you’ll find them far more valuable than sheer quantity from a lesser app. The ‘Vintage’ filter works magic on photos of buildings or scenery. (free)
85. Photoshop Express – As you would expect from Adobe, the interface and user experience of the Photoshop Express photo app for Apple and Android devices is faultless. It fulfils all the functions you need for picture editing and will probably be the one you turn to for sheer convenience. ‘Straighten’ and ‘Flip’ are two useful functions not included in many other apps (free).
RAW processors

86. RAW Pics.io – The most popular in-browser RAW file viewer and converter. Supports most DSLR RAW camera formats ($1.99/month with free trial).
87. RawTherapee – A cross-platform raw image processing program, released under the GNU General Public License Version 3 (free)
88. Darktable – An open-source photography workflow application and RAW developer
89. UFRaw – A utility to read and manipulate raw images from digital cameras. It can be used on its own or as a GIMP plug-in.
90. Photivo – Handles your raw files, as well as your bitmap files, in a non-destructive 16-bit processing pipeline.
91. Filmulator – Streamlined raw management and editing application centered around a film-simulating tone mapping algorithm.
92. PhotoFlow – Raw and raster image processor featuring non-destructive adjustment layers and 32-bit floating-point accuracy.
93. LightZone – Open-source digital darkroom software for Windows/Mac/Linux
94. RAW Photo Processor – A raw converter for Mac OS X (10.4–10.11), supporting almost all available digital raw formats
95. Iridient Developer – A powerful RAW image conversion application designed and optimized specifically for Mac OS X. Iridient Developer supports RAW image formats from over 620 digital camera models ($99)
96. Photoninja – A professional-grade RAW converter that delivers exceptional detail, outstanding image quality, and a distinctive, natural look ($129)
Photo viewers and managers
97. digiKam – Advanced digital photo management application for importing and organizing photos (free)
98. gThumb – An image viewer and browser. It also includes an importer tool for transferring photos from cameras (free)
99. nomacs – A free, open-source image viewer that supports multiple platforms. You can use it for viewing all common image formats, including raw and PSD images.
100. PortraitPro – The world’s best-selling retouching software, which intelligently enhances every aspect of a portrait for beautiful results ($79.90)
101. Lucid – Stand-alone desktop software that makes it easy for lifestyle photography enthusiasts to improve pictures ($49)
102. Photo Mechanic – A standalone image browser and workflow accelerator that lets you view your digital photos with convenience and speed
You and I sift through a lot of data for our jobs. Data about website performance, sales performance, product adoption, customer service, marketing campaign results … the list goes on.
When you manage multiple content assets, such as social media or a blog, with multiple sources of data, it can get overwhelming. What should you be tracking? What actually matters? How do you visualize and analyze the data so you can extract insights and actionable information?
More importantly, how can you make reporting more efficient when you’re busy working on multiple projects at once?
One of the struggles that slows down my own reporting and analysis is understanding what type of chart to use — and why. That’s because choosing the wrong type of chart or simply defaulting to the most common type of visualization could cause confusion with the viewer or lead to mistaken data interpretation.
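One way to stop defaulting to the most common visualization is to pick the chart from the question you are answering. The mapping below is a hedged rule of thumb of my own, not an official taxonomy; `suggest_chart` and its goal labels are hypothetical names used purely for illustration.

```python
# A simple heuristic mapping "what do I want to show?" to a
# sensible default chart type. These defaults are one common
# convention, not a universal rule.
def suggest_chart(goal, num_series=1):
    if goal == "trend":          # change over time
        return "line chart"
    if goal == "comparison":     # discrete categories side by side
        return "bar chart" if num_series <= 3 else "grouped bar chart"
    if goal == "distribution":   # spread of a single variable
        return "histogram"
    if goal == "relationship":   # two continuous variables
        return "scatter plot"
    if goal == "composition":    # parts of a whole at one point in time
        return "stacked bar chart"
    return "table"               # when in doubt, show the numbers

print(suggest_chart("trend"))                    # prints line chart
print(suggest_chart("comparison", num_series=5)) # prints grouped bar chart
```

Starting from the question rather than the chart gallery makes it much harder to pick a visualization that misleads the viewer.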