Data science is, among other things, a language, according to Robert Brunner, a professor in the School of Information Sciences at the University of Illinois. This concept might come as a shock to those who associate data science jobs with numbers alone. Data scientists increasingly work across entire organizations, and communication skills are as important as technical ability. Data science is booming in every industry, as more people and companies are investing their time to better understand this constantly expanding field. The ability to communicate effectively is a key talent differentiator. Whether you pursue a deeper knowledge of data science by learning a specialty, or simply want to gain a smart overview of the field, mastering the right terms will fast-track you to success on your educational and professional journey.
Here are the top data science glossary terms to know about in 2020.
AI Chatbots–AI chatbots are a class of software that simulates conversation with users in natural language, typically through messaging applications. The main attraction of the technology is that a chatbot is available 24/7 on a website, which improves response rates and customer satisfaction. Chatbots use machine learning and natural language processing (NLP) to deliver a near-human conversational experience.
AutoML–Automated machine learning, or AutoML, automates the end-to-end process of applying machine learning to achieve the goals of data science projects. AutoML is an attempt to make machine learning available to people without strong expertise in the field, although more realistically it is designed to increase the productivity of experienced data scientists by automating many steps in the data science process. Some of the advantages of using AutoML include: (i) increasing productivity by automating repetitive tasks, which lets a data scientist focus more on the problem than on the models; (ii) automating components of the data pipeline, which helps avoid errors that might slip in with manual processes; and (iii) democratizing machine learning by making its power accessible to those outside the data science team.
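As a rough illustration of the idea, the sketch below uses scikit-learn's GridSearchCV to automate model and hyperparameter selection over two candidate algorithms; dedicated AutoML libraries (such as auto-sklearn or TPOT) go further by also automating feature engineering and pipeline construction. The dataset and parameter grids are illustrative choices, not a prescribed setup.

```python
# A minimal sketch of the idea behind AutoML: automatically searching over
# candidate models and hyperparameters instead of tuning them by hand.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Candidate models and hyperparameter grids to search automatically.
candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=42), {"n_estimators": [100, 300]}),
]

best_score, best_model = 0.0, None
for model, grid in candidates:
    search = GridSearchCV(model, grid, cv=5)  # cross-validated grid search
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"Best cross-validated accuracy: {best_score:.3f}")
print(f"Held-out test accuracy: {best_model.score(X_test, y_test):.3f}")
```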
BERT–BERT (Bidirectional Encoder Representations from Transformers) was introduced in a 2018 paper by researchers at Google AI Language. It caused a stir in the machine learning community by presenting state-of-the-art results on a wide variety of NLP tasks. BERT's main technical advance is applying the bidirectional training of the Transformer, a popular attention model, to language modeling. This is in contrast to prior efforts, which examined a sequence of text either from left to right or with combined left-to-right and right-to-left training. BERT's results show that a bidirectionally trained language model develops a deeper sense of language context and flow than single-direction language models.
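In practice, pretrained BERT models are usually loaded from a library rather than trained from scratch. The sketch below uses the Hugging Face `transformers` library (an assumed tooling choice, not part of the original paper) and the commonly used `bert-base-uncased` checkpoint to produce contextual embeddings for a sentence.

```python
# A brief sketch of loading a pretrained BERT model with the Hugging Face
# `transformers` library and producing contextual token embeddings.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Data science is a language.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding vector per token; 768 dimensions for bert-base.
print(outputs.last_hidden_state.shape)
```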
Cognitive computing– Cognitive computing is based on self-learning systems that use machine-learning techniques to perform specific, human-like tasks in an intelligent way. The main goal of cognitive computing is to simulate human thought processes using a computerized model. With self-learning algorithms that use pattern recognition and natural language processing, the computer is able to imitate the way the human brain functions.
Data pipeline– Data scientists depend on data pipelines to encapsulate a number of processing steps required to prepare data for machine learning. These steps may include acquiring data sets from various data sources, performing "data prep" operations such as cleansing data and handling missing data and outliers, and transforming data into a form better suited for machine learning. A data pipeline also includes training or fitting a model and determining its accuracy. Data pipelines are typically automated so that their steps can be performed on a continual basis.
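The sketch below shows one minimal way to express such a pipeline with scikit-learn's Pipeline class, chaining imputation and scaling ("data prep") with model fitting so that all steps can be re-run together on new data; the dataset and steps are illustrative.

```python
# A minimal data pipeline: prep steps plus model fitting, bundled so the
# whole sequence can be applied consistently to training and new data.
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # handle missing values
    ("scale", StandardScaler()),                   # transform features
    ("model", LogisticRegression(max_iter=5000)),  # train/fit a model
])

pipeline.fit(X_train, y_train)
print(f"Test accuracy: {pipeline.score(X_test, y_test):.3f}")
```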
Edge analytics–Edge analytics is a method of performing data collection and analysis where an analytical computation is performed on data at the point of collection (e.g. a sensor) instead of waiting for the data to be sent back to a centralized data store and then analyzed. Edge analytics has come into favor as the IoT model of connected devices has become more established. In many enterprises, streaming data from various company operations connected to IoT networks creates a massive amount of operational data which can be difficult and expensive to manage. By running the data through an analytics process as it is collected, at the "edge" of a network, it's possible to establish a filter for what information is worth sending to a central data store for later use.
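As a purely illustrative sketch (not tied to any particular IoT platform), the snippet below shows the core edge-analytics idea: analyze readings where they are collected and forward only the ones worth storing centrally. The sensor, threshold and forwarding function are stand-ins.

```python
# Edge-analytics idea in miniature: filter at the point of collection so
# routine readings never leave the device.
import random

THRESHOLD = 75.0  # assumed alert threshold for a temperature sensor

def read_sensor():
    """Stand-in for reading a value from a local sensor."""
    return random.uniform(60.0, 90.0)

def send_to_central_store(reading):
    """Stand-in for forwarding data to a central data store."""
    print(f"Forwarding anomalous reading: {reading:.1f}")

for _ in range(10):
    reading = read_sensor()
    # The filtering decision happens at the "edge", next to the sensor.
    if reading > THRESHOLD:
        send_to_central_store(reading)
```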
GANs–Generative adversarial networks (GANs) are deep neural network architectures composed of two networks pitted against each other (hence the term "adversarial"). The theory of GANs was first introduced in a 2014 paper by deep learning luminary Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio. The potential of GANs is significant because they are generative models: they create new data instances that resemble the training data. For example, GANs can create images that look like photographs of human faces, even though the faces don't belong to any real person.
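The sketch below is a compact PyTorch illustration of the two-network setup (not the architecture from the original paper): a generator and a discriminator trained against each other on a toy one-dimensional Gaussian dataset, so the adversarial training loop is visible end to end.

```python
# A toy GAN: the generator learns to mimic samples from N(3, 0.5) while the
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0    # "real" data samples
    fake = generator(torch.randn(64, 8))     # generator output from noise

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The mean of generated samples ideally drifts toward 3.0 as training goes on.
print(generator(torch.randn(1000, 8)).mean().item())
```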
Geospatial analytics– Geospatial analytics is a technology that gathers, manipulates and displays geographic information system (GIS) data (e.g. GPS data) and imagery (e.g. satellite photographs). Geospatial analytics uses geographic coordinates as well as specific identifier variables such as street address and zip code. The technology is used to create geographic models and data visualizations for more accurate modeling and predictions.
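One of the most basic geospatial computations is great-circle distance between two GPS coordinates. The sketch below implements it with the haversine formula; the coordinates and the use case are purely illustrative.

```python
# Great-circle distance between two (latitude, longitude) points via the
# haversine formula.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Approximate distance in kilometres between two GPS coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km is roughly Earth's mean radius

# Chicago to Urbana-Champaign, roughly 200 km apart.
print(f"{haversine_km(41.88, -87.63, 40.11, -88.21):.1f} km")
```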
Graph database– A graph database uses "graph theory" to store, map and query relationships between data elements. Essentially, a graph database is a collection of what are known as nodes and edges. A node represents an entity such as a product or customer, while an edge represents a connection or relationship between two nodes. Each node in a graph database is defined by a unique identifier, a set of outgoing and/or incoming edges, and a set of key/value pairs. Each edge is defined by a unique identifier, a starting node and an ending node, along with a set of properties. Graph databases are well suited for analyzing interconnections.
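The plain-Python sketch below mirrors the node/edge model described above; it is not tied to any particular graph database product, which would typically expose the same structure through a query language such as Neo4j's Cypher. The entities and properties are made up for illustration.

```python
# Nodes and edges with identifiers and key/value properties, plus a simple
# traversal over the relationships.
nodes = {
    "c1": {"label": "Customer", "name": "Ada"},
    "c2": {"label": "Customer", "name": "Grace"},
    "p1": {"label": "Product", "name": "Laptop"},
}

# Each edge has an identifier, a start node, an end node and properties.
edges = {
    "e1": {"start": "c1", "end": "p1", "type": "PURCHASED", "year": 2020},
    "e2": {"start": "c2", "end": "p1", "type": "PURCHASED", "year": 2019},
}

# Traverse the graph: which customers purchased the product "Laptop"?
buyers = [
    nodes[e["start"]]["name"]
    for e in edges.values()
    if e["type"] == "PURCHASED" and nodes[e["end"]]["name"] == "Laptop"
]
print(buyers)  # ['Ada', 'Grace']
```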
Julia– Even if you're a data scientist who uses the most popular programming languages, R or Python, you should still be aware of a relatively new language that was designed from the ground up for data science applications. Julia was officially announced in 2012 in a blog post. The designers of the language and two others founded Julia Computing in July 2015 to "develop products that make Julia easy to use, easy to deploy, and easy to scale." Julia is a free, open-source, high-level programming language for numerical computing. It offers the convenience of a dynamic language with the performance of a compiled, statically typed language: a JIT compiler generates native machine code, and the language's design encourages type stability through specialization via multiple dispatch, making it easy to compile to efficient code.
Low-code/No-code– You may see the terms "low-code" and/or "no-code" being mentioned a lot these days. Many new products, along with some mature products, are being re-branded as adopting low-code/no-code methodologies. Simply defined, a low-code/no-code development platform is a visual integrated development environment that allows citizen developers to drag and drop application components, connect them together and create a finished application. Many enterprise BI platforms fall into this platform category.
Moreover, according to Vinod Bakthavachalam, a senior data scientist at Coursera, using the following data science terms accurately will help you stand out from the crowd:
Business Intelligence (BI)- BI is the process of analyzing and reporting historical data to guide future decision-making. BI helps leaders make better strategic decisions moving forward by determining what happened in the past using data, like sales statistics and operational metrics.
Data Engineering- Data engineers build the infrastructure through which data is gathered, cleaned, stored and prepped for use by data scientists. Good engineers are invaluable, and building a data science team without them is a "cart before the horse" approach.
Decision Science- Under the umbrella of data science, decision scientists apply math and technology to solve business problems and add in behavioral science and design thinking (a process that aims to better understand the end user).
Artificial Intelligence (AI)- AI computer systems can perform tasks that normally require human intelligence. This doesn't necessarily mean replicating the human mind, but instead involves using human reasoning as a model to provide better services or create better products, such as speech recognition, decision-making and language translation.
Machine Learning- A subset of AI, machine learning refers to the process by which a system learns from inputted data by identifying patterns in that data, and then applying those patterns to new problems or requests. It allows data scientists to teach a computer to carry out tasks, rather than programming it to carry out each task step-by-step. It's used, for example, to learn a consumer's preferences and buying patterns to recommend products on Amazon or sift through resumes to identify the highest-potential job candidates based on key words and phrases.
Supervised Learning- This is a specific type of machine learning that involves the data scientist acting as a guide to teach the desired conclusion to the algorithm. For instance, the computer learns to identify animals by being trained on a dataset of images that are properly labeled with each species and its characteristics.
Classification- It is an example of supervised learning in which an algorithm puts a new piece of data under a pre-existing category, based on a set of characteristics for which the category is already known. For example, it can be used to determine if a customer is likely to spend over $20 online, based on their similarity to other customers who have previously spent that amount.
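The sketch below shows supervised classification with scikit-learn; the iris dataset stands in for the labeled examples described above, and the choice of a decision tree is just one reasonable option.

```python
# Supervised classification: learn from labeled examples, then assign new
# examples to the known categories.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clf = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)
print(clf.predict(X_test[:5]))                 # predicted categories for new data
print(f"Accuracy: {clf.score(X_test, y_test):.2f}")
```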
Cross validation– It is a method to validate the stability, or accuracy, of your machine-learning model. Although there are several types of cross validation, the most basic one involves splitting your training set in two, training the algorithm on one subset and then applying it to the second subset. Because you know what output you should receive, you can assess the model's validity.
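A common generalization of that idea is k-fold cross validation, sketched below with scikit-learn: the data is split into several folds and the model is trained and scored on each split. The dataset and model are illustrative.

```python
# 5-fold cross validation: one accuracy score per held-out fold.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores)
print(f"Mean accuracy: {scores.mean():.2f}")
```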
Clustering- It is classification but without the supervised learning aspect. With clustering, the algorithm receives inputted data and finds similarities in the data itself by grouping data points together that are alike.
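The sketch below illustrates clustering with scikit-learn's KMeans on synthetic, unlabeled points; the number of clusters and the data are assumptions made for the example.

```python
# Clustering: group unlabeled points purely by similarity.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=7)  # unlabeled points
kmeans = KMeans(n_clusters=3, n_init=10, random_state=7).fit(X)
print(kmeans.labels_[:10])       # cluster assignment for the first 10 points
print(kmeans.cluster_centers_)   # the three discovered group centers
```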
Deep Learning – A more advanced form of machine learning, deep learning refers to neural networks with many layers between input and output, as opposed to shallow networks with only one such layer. Passing data through these successive layers allows computers to solve complex, real-world problems such as image and speech recognition.
Linear Regression- Linear regression models the relationship between two variables by fitting a linear equation to the observed data. By doing so, you can predict an unknown variable based on its related known variable. A simple example is the relationship between an individual's height and weight.
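The sketch below works through the height/weight example with scikit-learn: fit a line to observed data, then predict the unknown variable for a new observation. The numbers are made up for illustration.

```python
# Fit a linear equation to observed (height, weight) pairs and predict.
import numpy as np
from sklearn.linear_model import LinearRegression

heights_cm = np.array([[150], [160], [170], [180], [190]])
weights_kg = np.array([52, 60, 68, 77, 85])

model = LinearRegression().fit(heights_cm, weights_kg)
print(f"Predicted weight at 175 cm: {model.predict([[175]])[0]:.1f} kg")
print(f"Slope: {model.coef_[0]:.2f} kg per cm")
```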
A/B Testing- Generally used in product development, A/B testing is a randomized experiment in which you test two variants to determine the best course of action. For example, Google famously tested various shades of blue to determine which shade earned the most clicks.
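One common way to analyze an A/B test is a chi-square test on conversion counts, sketched below with SciPy; the visitor and conversion numbers are invented for the example.

```python
# Did variant B convert at a different rate than variant A?
from scipy.stats import chi2_contingency

# Variant A: 200 conversions out of 5,000 visitors; variant B: 250 out of 5,000.
table = [[200, 4800],
         [250, 4750]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value: {p_value:.4f}")  # a small p-value suggests the variants differ
```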
Hypothesis Testing- Hypothesis testing is the use of statistics to determine the probability that a given hypothesis is true. It's frequently used in clinical research.
Statistical Power- Statistical power is the probability of making the correct decision to reject the null hypothesis when the null hypothesis is false. In other words, it's the likelihood a study will detect an effect when there is an effect to be detected. A high statistical power means a lower likelihood of concluding incorrectly that a variable has no effect.
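Power calculations are often used to size a study before running it. The sketch below uses statsmodels (an assumed library choice) to ask how many subjects per group are needed to detect a medium effect (Cohen's d = 0.5) with 80% power at a 5% significance level.

```python
# Solve for the required sample size per group in a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64
```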
Standard Error- Standard error is a measure of the statistical accuracy of an estimate: the standard deviation of the estimate's sampling distribution. A larger sample size decreases the standard error.
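For the mean of a sample, the standard error is the sample standard deviation divided by the square root of the sample size, as the tiny sketch below shows with invented data.

```python
# Standard error of the mean: s / sqrt(n).
import numpy as np

sample = np.array([4.1, 3.8, 5.0, 4.6, 4.4, 3.9, 4.8, 4.2])
se = sample.std(ddof=1) / np.sqrt(len(sample))
print(f"Standard error of the mean: {se:.3f}")
```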
Causal inference- It is a process that tests whether there is a cause-and-effect relationship in a given situation, which is the goal of many data analyses in social and health sciences. Causal inference typically requires not only good data and algorithms, but also subject-matter expertise.
Exploratory Data Analysis (EDA)- EDA is often the first step when analyzing datasets. With EDA techniques, data scientists can summarize a dataset's main characteristics and inform the development of more complex models or logical next steps.
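A first EDA pass often looks like the pandas sketch below; the small DataFrame is a stand-in for a real dataset that would typically be loaded with something like pd.read_csv().

```python
# Summarize a dataset's main characteristics before modeling.
import pandas as pd

df = pd.DataFrame({
    "age": [23, 35, 31, 52, 46, 29],
    "purchases": [1, 4, 2, 8, 5, 3],
    "segment": ["new", "loyal", "new", "loyal", "loyal", "new"],
})

print(df.describe())                  # summary statistics for numeric columns
print(df["segment"].value_counts())   # distribution of a categorical column
print(df.corr(numeric_only=True))     # correlations between numeric columns
```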
Data Visualization- A key component of data science, data visualizations are visual representations of data that make it easier to detect and recognize patterns, trends and correlations. Visualization helps people understand the significance of data by placing it in a visual context.
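The sketch below shows a minimal visualization with matplotlib: plotting two variables against each other to reveal a trend. The data points and labels are invented.

```python
# A simple scatter plot that places numbers in a visual context.
import matplotlib.pyplot as plt

ad_spend = [10, 20, 30, 40, 50, 60]
sales = [25, 44, 58, 81, 95, 120]

plt.scatter(ad_spend, sales)
plt.xlabel("Ad spend ($k)")
plt.ylabel("Sales ($k)")
plt.title("Sales vs. advertising spend")
plt.show()
```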
R- R is a programming language and software environment for statistical computing. The R language is widely used among statisticians and data miners for developing statistical software and data analysis.
Python- Python is a programming language for general-purpose programming and is one language used to manipulate and store data. Many highly trafficked websites, such as YouTube, are created using Python.
SQL- Structured Query Language, or SQL, is another programming language, used to perform tasks such as updating or retrieving data from a database.
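The sketch below runs a few SQL statements from Python using the built-in sqlite3 module; the table and values are illustrative.

```python
# Create, update and query a table with SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, spend REAL)")
conn.execute("INSERT INTO customers (name, spend) VALUES ('Ada', 120.0), ('Grace', 80.0)")

conn.execute("UPDATE customers SET spend = spend + 20 WHERE name = 'Grace'")
for row in conn.execute("SELECT name, spend FROM customers ORDER BY spend DESC"):
    print(row)
conn.close()
```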
ETL- ETL is a type of data integration that refers to the three steps (extract, transform, load) used to blend data from multiple sources. It's often deployed to build a data warehouse. An important aspect of this data warehousing is that it consolidates data from multiple sources and transforms it into a common, useful format. For example, ETL normalizes data from multiple business departments and processes to make it standardized and consistent.
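The sketch below walks through the three ETL steps with pandas and SQLite; the source data, column names and transformations are illustrative assumptions, and an in-memory SQLite database stands in for a real warehouse.

```python
# Extract, transform, load in miniature.
import sqlite3
import pandas as pd

# Extract: pull raw data from a source system (an in-memory stand-in for
# something like pd.read_csv("sales_export.csv")).
raw = pd.DataFrame({"region": ["north", "SOUTH", "North"],
                    "revenue_usd": ["1,200", "950", "2,400"]})

# Transform: normalize formats so data from different sources is consistent.
raw["region"] = raw["region"].str.title()
raw["revenue_usd"] = raw["revenue_usd"].str.replace(",", "").astype(float)

# Load: write the cleaned table into the warehouse (SQLite stand-in).
conn = sqlite3.connect(":memory:")
raw.to_sql("sales", conn, if_exists="replace", index=False)
print(pd.read_sql("SELECT region, SUM(revenue_usd) AS total FROM sales GROUP BY region", conn))
conn.close()
```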
GitHub- GitHub is a code-sharing and publishing service, as well as a community for developers. It provides access control and several collaboration features, such as bug tracking, feature requests, task management and wikis for every project. GitHub offers both private repositories and free accounts, which are commonly used to host open-source software projects.
Data Models- Data models define how datasets are connected to each other and how they are processed and stored inside a system. Data models show the structure of a database, including the relationships and constraints, which helps data scientists understand how the data can best be stored and manipulated.
Data Warehouse- A data warehouse is a repository where all the data collected by an organization is stored and used as a guide to make management decisions.