Sentence embedding : In natural language processing, a sentence embedding is a representation of a sentence as a vector of numbers that encodes meaningful semantic information. State-of-the-art embeddings are based on the learned hidden-layer representations of dedicated sentence transformer models. BERT pioneered an approach involving a dedicated [CLS] token prepended to each sentence input to the model; the final hidden state vector of this token encodes information about the sentence and can be fine-tuned for use in sentence classification tasks. In practice, however, BERT's sentence embedding with the [CLS] token achieves poor performance, often worse than simply averaging non-contextual word embeddings. SBERT later achieved superior sentence embedding performance by fine-tuning BERT's [CLS] token embeddings through the use of a siamese neural network architecture on the SNLI dataset. Other approaches are loosely based on the idea of distributional semantics applied to sentences. Skip-Thought trains an encoder-decoder structure for the task of predicting neighboring sentences; this has been shown to achieve worse performance than approaches such as InferSent or SBERT. An alternative direction is to aggregate word embeddings, such as those returned by Word2vec, into sentence embeddings. The most straightforward approach is to simply compute the average of the word vectors, known as continuous bag-of-words (CBOW). However, more elaborate solutions based on word vector quantization have also been proposed. One such approach is the vector of locally aggregated word embeddings (VLAWE), which demonstrated performance improvements in downstream text classification tasks.
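The CBOW baseline mentioned above is easy to sketch. The snippet below is a minimal illustration, assuming a small hypothetical lookup table of pre-trained word vectors (in practice these would come from a model such as Word2vec or GloVe):

```python
import numpy as np

# Hypothetical pre-trained word vectors, for illustration only.
word_vectors = {
    "the": np.array([0.1, 0.3, -0.2]),
    "cat": np.array([0.7, -0.1, 0.4]),
    "sat": np.array([0.2, 0.5, 0.1]),
}

def cbow_sentence_embedding(tokens, vectors):
    """Average the word vectors of the in-vocabulary tokens (CBOW baseline)."""
    known = [vectors[t] for t in tokens if t in vectors]
    if not known:
        return np.zeros(next(iter(vectors.values())).shape)
    return np.mean(known, axis=0)

print(cbow_sentence_embedding(["the", "cat", "sat"], word_vectors))
```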
|
Sentence embedding : In recent years, sentence embedding has seen a growing level of interest due to its applications in natural-language-queryable knowledge bases through the use of vector indexing for semantic search. LangChain, for instance, utilizes sentence transformers for the purpose of indexing documents. In particular, an index is built by generating embeddings for chunks of documents and storing (document chunk, embedding) tuples. Then, given a query in natural language, the embedding for the query can be generated. A top-k similarity search algorithm is then used between the query embedding and the document chunk embeddings to retrieve the most relevant document chunks as context information for question answering tasks. This approach is also known formally as retrieval-augmented generation. Though not as predominant as BERTScore, sentence embeddings are also commonly used for sentence similarity evaluation, which sees common use in optimizing a large language model's generation parameters by comparing candidate sentences against reference sentences. By using the cosine similarity of the sentence embeddings of candidate and reference sentences as the evaluation function, a grid-search algorithm can be utilized to automate hyperparameter optimization.
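The retrieval step described above reduces to a top-k nearest-neighbor search under cosine similarity. The following is a sketch in plain NumPy; the chunk texts, the embedding dimensionality and the random embeddings are placeholders, and a production system would typically use a vector index rather than brute-force search:

```python
import numpy as np

def top_k_chunks(query_embedding, chunk_embeddings, chunks, k=3):
    """Return the k document chunks whose embeddings are most cosine-similar
    to the query embedding (brute-force semantic search)."""
    q = query_embedding / np.linalg.norm(query_embedding)
    m = chunk_embeddings / np.linalg.norm(chunk_embeddings, axis=1, keepdims=True)
    scores = m @ q                         # cosine similarities
    best = np.argsort(scores)[::-1][:k]    # indices of the k highest scores
    return [(chunks[i], float(scores[i])) for i in best]

# Toy usage with random vectors standing in for real sentence embeddings.
rng = np.random.default_rng(0)
chunks = ["chunk A", "chunk B", "chunk C", "chunk D"]
chunk_emb = rng.normal(size=(4, 8))
query_emb = rng.normal(size=8)
print(top_k_chunks(query_emb, chunk_emb, chunks, k=2))
```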
|
Sentence embedding : A way of testing sentence encodings is to apply them to the Sentences Involving Compositional Knowledge (SICK) corpus for both entailment (SICK-E) and relatedness (SICK-R). The best results are obtained using a BiLSTM network trained on the Stanford Natural Language Inference (SNLI) Corpus: the Pearson correlation coefficient for SICK-R is 0.885 and the result for SICK-E is 86.3. A slight improvement over these scores has since been reported (SICK-R: 0.888, SICK-E: 87.8) using a concatenation of bidirectional gated recurrent units.
|
Sentence embedding : Distributional semantics Word embedding
|
Sentence embedding : InferSent sentence embeddings and training code Universal Sentence Encoder Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning == References ==
|
Generalized Hebbian algorithm : The generalized Hebbian algorithm, also known in the literature as Sanger's rule, is a linear feedforward neural network for unsupervised learning with applications primarily in principal components analysis. First defined in 1989, it is similar to Oja's rule in its formulation and stability, except it can be applied to networks with multiple outputs. The name originates because of the similarity between the algorithm and a hypothesis made by Donald Hebb about the way in which synaptic strengths in the brain are modified in response to experience, i.e., that changes are proportional to the correlation between the firing of pre- and post-synaptic neurons.
|
Generalized Hebbian algorithm : Consider the problem of learning a linear code for some data. Each data point is a multi-dimensional vector x ∈ ℝ^n, and can be (approximately) represented as a linear sum of linear code vectors w_1, …, w_m. When m = n, it is possible to represent the data exactly. If m < n, it is possible to represent the data approximately. To minimize the L2 loss of the representation, w_1, …, w_m should be the highest principal component vectors. The generalized Hebbian algorithm is an iterative algorithm to find the highest principal component vectors, in an algorithmic form that resembles unsupervised Hebbian learning in neural networks. Consider a one-layered neural network with n input neurons and m output neurons y_1, …, y_m. The linear code vectors are the connection strengths, that is, w_ij is the synaptic weight or connection strength between the j-th input and the i-th output neuron. The generalized Hebbian algorithm learning rule is of the form Δw_ij = η y_i ( x_j − ∑_{k=1}^{i} w_kj y_k ), where η is the learning rate parameter.
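A compact NumPy sketch of the update rule above follows; in matrix form Sanger's rule reads ΔW = η ( y xᵀ − LT(y yᵀ) W ), where LT keeps the lower triangle including the diagonal. The toy data, learning rate and epoch count are illustrative choices, not part of the original formulation:

```python
import numpy as np

def generalized_hebbian(X, m, eta=0.01, epochs=100, seed=0):
    """Estimate the top-m principal component directions of the rows of X
    using Sanger's rule: dW = eta * (y x^T - tril(y y^T) W), with y = W x."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(m, X.shape[1]))   # rows converge to the PCs
    for _ in range(epochs):
        for x in X:
            y = W @ x                                            # output activations
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Toy data stretched along (1, 1), so the leading PC is close to (1, 1)/sqrt(2).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 1)) * np.array([[1.0, 1.0]]) + 0.1 * rng.normal(size=(500, 2))
X -= X.mean(axis=0)
W = generalized_hebbian(X, m=2)
print(np.round(W / np.linalg.norm(W, axis=1, keepdims=True), 3))
```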
|
Generalized Hebbian algorithm : The generalized Hebbian algorithm is used in applications where a self-organizing map is necessary, or where a feature or principal components analysis can be used. Examples of such cases include artificial intelligence and speech and image processing. Its importance comes from the fact that learning is a single-layer process – that is, a synaptic weight changes only depending on the response of the inputs and outputs of that layer, thus avoiding the multi-layer dependence associated with the backpropagation algorithm. It also has a simple and predictable trade-off between learning speed and accuracy of convergence as set by the learning rate parameter η. As an example, (Olshausen and Field, 1996) applied the generalized Hebbian algorithm to 8-by-8 patches of photos of natural scenes, and found that it results in Fourier-like features. As expected, the features are the same as the principal components found by principal components analysis, and they are determined by the 64 × 64 covariance matrix of the samples of 8-by-8 patches. In other words, the result is determined by the second-order statistics of the pixels in images. They criticized this as insufficient to capture the higher-order statistics which are necessary to explain the Gabor-like features of simple cells in the primary visual cortex.
|
Generalized Hebbian algorithm : Hebbian learning Factor analysis Contrastive Hebbian learning Oja's rule == References ==
|
ROCm : ROCm is an Advanced Micro Devices (AMD) software stack for graphics processing unit (GPU) programming. ROCm spans several domains, including general-purpose computing on graphics processing units (GPGPU), high performance computing (HPC), and heterogeneous computing. It offers several programming models: HIP (GPU-kernel-based programming), OpenMP (directive-based programming), and OpenCL. ROCm is free, libre and open-source software (except the GPU firmware blobs), and it is distributed under various licenses. ROCm initially stood for Radeon Open Compute platform; however, due to Open Compute being a registered trademark, ROCm is no longer an acronym — it is simply AMD's open-source stack designed for GPU compute.
|
ROCm : The first GPGPU software stack from ATI/AMD was Close to Metal, which became Stream. ROCm was launched around 2016 with the Boltzmann Initiative. The ROCm stack builds upon previous AMD GPU stacks; some tools trace back to GPUOpen and others to the Heterogeneous System Architecture (HSA).
|
ROCm : ROCm as a stack ranges from the kernel driver to the end-user applications. AMD has introductory videos about AMD GCN hardware and ROCm programming via its learning portal. To date, one of the best technical introductions to the stack and to ROCm/HIP programming is still to be found on Reddit.
|
ROCm : ROCm is primarily targeted at discrete professional GPUs, but unofficial support includes the Vega family and RDNA 2 consumer GPUs. Accelerated Processing Units (APUs) are "enabled", but not officially supported; getting ROCm working on them requires additional effort.
|
ROCm : There is one kernel-space component, ROCk; the rest of the stack (roughly a hundred components) is made of user-space modules. The unofficial typographic policy is to use uppercase "ROC" followed by lowercase for low-level libraries (e.g. ROCt) and the contrary for user-facing libraries (e.g. rocBLAS). AMD is actively developing with the LLVM community, but upstreaming is not instantaneous and, as of January 2022, is still lagging. AMD therefore still officially packages various LLVM forks for parts that are not yet upstreamed: compiler optimizations destined to remain proprietary, debug support, OpenMP offloading, etc.
|
ROCm : ROCm competes with other GPU computing stacks: Nvidia CUDA and Intel OneAPI.
|
ROCm : AMD Software – a general overview of AMD's drivers, APIs, and development endeavors; GPUOpen – AMD's complementary graphics stack; AMD Radeon Software – AMD's software distribution channel.
|
ROCm : "ROCm official documentation". AMD. February 10, 2022. "ROCm Learning Center". AMD. January 25, 2022. "ROCm official documentation on the github super-project". AMD. January 25, 2022. "ROCm official documentation - pre 5.0". AMD. January 19, 2022. "GPU-Accelerated Applications with AMD Instinct Accelerators & AMD ROCm Software" (PDF). AMD. January 25, 2022. "AMD Infinity Hub". AMD. January 25, 2022. — Docker containers for scientific applications.
|
PaLM : PaLM (Pathways Language Model) is a 540 billion-parameter dense decoder-only transformer-based large language model (LLM) developed by Google AI. Researchers also trained smaller versions of PaLM (with 8 and 62 billion parameters) to test the effects of model scale. PaLM is capable of a wide range of tasks, including commonsense reasoning, arithmetic reasoning, joke explanation, code generation, and translation. When combined with chain-of-thought prompting, PaLM achieved significantly better performance on datasets requiring reasoning of multiple steps, such as word problems and logic-based questions. The model was first announced in April 2022 and remained private until March 2023, when Google launched an API for PaLM and several other technologies. The API was initially available to a limited number of developers who joined a waitlist before it was released to the public. Google and DeepMind developed a version of PaLM 540B (the parameter count, 540 billion), called Med-PaLM, that is fine-tuned on medical data and outperforms previous models on medical question answering benchmarks. Med-PaLM was the first to obtain a passing score on U.S. medical licensing questions, and in addition to answering both multiple choice and open-ended questions accurately, it also provides reasoning and is able to evaluate its own responses. Google also extended PaLM using a vision transformer to create PaLM-E, a state-of-the-art vision-language model that can be used for robotic manipulation. The model can perform tasks in robotics competitively without the need for retraining or fine-tuning. In May 2023, Google announced PaLM 2 at the annual Google I/O keynote. PaLM 2 is reported to be a 340 billion-parameter model trained on 3.6 trillion tokens. In June 2023, Google announced AudioPaLM for speech-to-speech translation, which uses the PaLM-2 architecture and initialization.
|
PaLM : PaLM is pre-trained on a high-quality corpus of 780 billion tokens that comprise various natural language tasks and use cases. This dataset includes filtered webpages, books, Wikipedia articles, news articles, source code obtained from open source repositories on GitHub, and social media conversations. It is based on the dataset used to train Google's LaMDA model. The social media conversation portion of the dataset makes up 50% of the corpus, which aids the model in its conversational capabilities. PaLM 540B was trained over two TPU v4 Pods with 3,072 TPU v4 chips in each Pod attached to 768 hosts, connected using a combination of model and data parallelism, which was the largest TPU configuration. This allowed for efficient training at scale, using 6,144 chips, and marked a record for the highest training efficiency achieved for LLMs at this scale: a hardware FLOPs utilization of 57.8%.
|
PaLM : LaMDA, PaLM's predecessor Gemini, PaLM's successor Chinchilla == References ==
|
Probabilistic numerics : Probabilistic numerics is an active field of study at the intersection of applied mathematics, statistics, and machine learning centering on the concept of uncertainty in computation. In probabilistic numerics, tasks in numerical analysis such as integration, linear algebra, optimization, simulation, and the solution of differential equations are seen as problems of statistical, probabilistic, or Bayesian inference.
|
Probabilistic numerics : A numerical method is an algorithm that approximates the solution to a mathematical problem (examples below include the solution to a linear system of equations, the value of an integral, the solution of a differential equation, the minimum of a multivariate function). In a probabilistic numerical algorithm, this process of approximation is thought of as a problem of estimation, inference or learning and realised in the framework of probabilistic inference (often, but not always, Bayesian inference). Formally, this means casting the setup of the computational problem in terms of a prior distribution, formulating the relationship between numbers computed by the computer (e.g. matrix-vector multiplications in linear algebra, gradients in optimization, values of the integrand or the vector field defining a differential equation) and the quantity in question (the solution of the linear problem, the minimum, the integral, the solution curve) in a likelihood function, and returning a posterior distribution as the output. In most cases, numerical algorithms also take internal adaptive decisions about which numbers to compute, which form an active learning problem. Many of the most popular classic numerical algorithms can be re-interpreted in the probabilistic framework. This includes the method of conjugate gradients, Nordsieck methods, Gaussian quadrature rules, and quasi-Newton methods. In all these cases, the classic method is based on a regularized least-squares estimate that can be associated with the posterior mean arising from a Gaussian prior and likelihood. In such cases, the variance of the Gaussian posterior is then associated with a worst-case estimate for the squared error. Probabilistic numerical methods promise several conceptual advantages over classic, point-estimate based approximation techniques. They return structured error estimates, in particular the ability to return joint posterior samples, i.e. multiple realistic hypotheses for the true unknown solution of the problem. Hierarchical Bayesian inference can be used to set and control internal hyperparameters in such methods in a generic fashion, rather than having to re-invent novel methods for each parameter. Since they use and allow for an explicit likelihood describing the relationship between computed numbers and the target quantity, probabilistic numerical methods can use the results of even highly imprecise, biased and stochastic computations; conversely, they can also provide a likelihood in computations often considered "likelihood-free" elsewhere. Because all probabilistic numerical methods use essentially the same data type – probability measures – to quantify uncertainty over both inputs and outputs, they can be chained together to propagate uncertainty across large-scale, composite computations. Finally, information from multiple sources (e.g. algebraic or mechanistic knowledge about the form of a differential equation, and observations of the trajectory of the system collected in the physical world) can be combined naturally and inside the inner loop of the algorithm, removing otherwise necessary nested loops in computation, e.g. in inverse problems. These advantages are essentially the equivalent of similar functional advantages that Bayesian methods enjoy over point estimates in machine learning, applied or transferred to the computational domain.
|
Probabilistic numerics : The interplay between numerical analysis and probability is touched upon by a number of other areas of mathematics, including average-case analysis of numerical methods, information-based complexity, game theory, and statistical decision theory. Precursors to what is now being called "probabilistic numerics" can be found as early as the late 19th and early 20th century. The origins of probabilistic numerics can be traced to a discussion of probabilistic approaches to polynomial interpolation by Henri Poincaré in his Calcul des Probabilités. In modern terminology, Poincaré considered a Gaussian prior distribution on a function f : ℝ → ℝ, expressed as a formal power series with random coefficients, and asked for "probable values" of f(x) given this prior and n ∈ ℕ observations f(a_i) = B_i for i = 1, …, n. A later seminal contribution to the interplay of numerical analysis and probability was provided by Albert Suldin in the context of univariate quadrature. The statistical problem considered by Suldin was the approximation of the definite integral ∫_a^b u(t) dt of a function u : [a, b] → ℝ, under a Brownian motion prior on u, given access to pointwise evaluation of u at nodes t_1, …, t_n ∈ [a, b]. Suldin showed that, for given quadrature nodes, the quadrature rule with minimal mean squared error is the trapezoidal rule; furthermore, this minimal error is proportional to the sum of cubes of the inter-node spacings. As a result, one can see the trapezoidal rule with equally-spaced nodes as statistically optimal in some sense – an early example of the average-case analysis of a numerical method. Suldin's point of view was later extended by Mike Larkin. Note that Suldin's Brownian motion prior on the integrand u is a Gaussian measure and that the operations of integration and of pointwise evaluation of u are both linear maps. Thus, the definite integral ∫_a^b u(t) dt is a real-valued Gaussian random variable. In particular, after conditioning on the observed pointwise values of u, it follows a normal distribution with mean equal to the trapezoidal rule and variance equal to (1/12) ∑_{i=2}^{n} (t_i − t_{i−1})³. This viewpoint is very close to that of Bayesian quadrature, seeing the output of a quadrature method not just as a point estimate but as a probability distribution in its own right. As noted by Houman Owhadi and collaborators, interplays between numerical approximation and statistical inference can also be traced back to Pálásti and Rényi, Sard, Kimeldorf and Wahba (on the correspondence between Bayesian estimation and spline smoothing/interpolation) and Larkin (on the correspondence between Gaussian process regression and numerical approximation). Although the approach of modelling a perfectly known function as a sample from a random process may seem counterintuitive, a natural framework for understanding it can be found in information-based complexity (IBC), the branch of computational complexity founded on the observation that numerical implementation requires computation with partial information and limited resources. In IBC, the performance of an algorithm operating on incomplete information can be analyzed in the worst-case or the average-case (randomized) setting with respect to the missing information.
Moreover, as Packel observed, the average case setting could be interpreted as a mixed strategy in an adversarial game obtained by lifting a (worst-case) minmax problem to a minmax problem over mixed (randomized) strategies. This observation leads to a natural connection between numerical approximation and Wald's decision theory, evidently influenced by von Neumann's theory of games. To describe this connection consider the optimal recovery setting of Micchelli and Rivlin in which one tries to approximate an unknown function from a finite number of linear measurements on that function. Interpreting this optimal recovery problem as a zero-sum game where Player I selects the unknown function and Player II selects its approximation, and using relative errors in a quadratic norm to define losses, Gaussian priors emerge as optimal mixed strategies for such games, and the covariance operator of the optimal Gaussian prior is determined by the quadratic norm used to define the relative error of the recovery.
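As a concrete illustration of the Suldin result described above, the following sketch computes the posterior over a definite integral under a Brownian motion prior: the posterior mean is exactly the trapezoidal rule, and the posterior variance is (1/12) ∑ (t_i − t_{i−1})³ as stated in the text. The test function and node placement are arbitrary choices for demonstration:

```python
import numpy as np

def bayesian_quadrature_brownian(t, u_vals):
    """Posterior over the integral of u on [t[0], t[-1]] under a Brownian
    motion prior, given noise-free evaluations u_vals at sorted nodes t."""
    t, u_vals = np.asarray(t, float), np.asarray(u_vals, float)
    gaps = np.diff(t)
    mean = np.sum(0.5 * (u_vals[:-1] + u_vals[1:]) * gaps)   # trapezoidal rule
    var = np.sum(gaps**3) / 12.0                             # sum of cubed spacings / 12
    return mean, var

t = np.linspace(0.0, 1.0, 6)
mean, var = bayesian_quadrature_brownian(t, t**2)   # true integral of t^2 on [0, 1] is 1/3
print(mean, var)
```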
|
Probabilistic numerics : ProbNum: Probabilistic Numerics in Python. ProbNumDiffEq.jl: Probabilistic numerical ODE solvers based on filtering implemented in Julia. Emukit: Adaptable Python toolbox for decision-making under uncertainty. BackPACK: Built on top of PyTorch. It efficiently computes quantities other than the gradient.
|
Probabilistic numerics : Average-case analysis Information-based complexity Uncertainty quantification == References ==
|
Hybrid intelligent system : Hybrid intelligent system denotes a software system which employs, in parallel, a combination of methods and techniques from artificial intelligence subfields, such as neuro-symbolic systems, neuro-fuzzy systems, hybrid connectionist-symbolic models, fuzzy expert systems, connectionist expert systems, evolutionary neural networks, genetic fuzzy systems, rough fuzzy hybridization, and reinforcement learning with fuzzy, neural, or evolutionary methods, as well as symbolic reasoning methods. From the cognitive science perspective, every natural intelligent system is hybrid because it performs mental operations on both the symbolic and subsymbolic levels. For the past few years, there has been an increasing discussion of the importance of AI systems integration, based on the notion that simple and specific AI systems (such as systems for computer vision, speech synthesis, etc., or software that employs some of the models mentioned above) have already been created, and that now is the time for integration to create broad AI systems. Proponents of this approach are researchers such as Marvin Minsky, Ron Sun, Aaron Sloman, Angelo Dalli and Michael A. Arbib. An example hybrid is a hierarchical control system in which the lowest, reactive layers are sub-symbolic. The higher layers, having relaxed time constraints, are capable of reasoning from an abstract world model and performing planning. Intelligent systems usually rely on hybrid reasoning processes, which include induction, deduction, abduction and reasoning by analogy.
|
Hybrid intelligent system : AI alignment AI effect Applications of artificial intelligence Artificial intelligence systems integration Intelligent control Lists List of emerging technologies Outline of artificial intelligence
|
Hybrid intelligent system : R. Sun & L. Bookman, (eds.), Computational Architectures Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, Needham, MA. 1994. http://www.cogsci.rpi.edu/~rsun/book2-ann.html Archived 2009-05-05 at the Wayback Machine S. Wermter and R. Sun, (eds.) Hybrid Neural Systems. Springer-Verlag, Heidelberg. 2000. http://www.cogsci.rpi.edu/~rsun/book4-ann.html Archived 2009-09-24 at the Wayback Machine R. Sun and F. Alexandre, (eds.) Connectionist-Symbolic Integration. Lawrence Erlbaum Associates, Mahwah, NJ. 1997. Ibaraki, S. Hybrid Intelligence interview with Angelo Dalli in IEEE Technology and Management Society. 2024. Albus, J. S., Bostelman, R., Chang, T., Hong, T., Shackleford, W., and Shneier, M. Learning in a Hierarchical Control System: 4D/RCS in the DARPA LAGR Program NIST, 2006 A.S. d'Avila Garcez, Luis C. Lamb & Dov M. Gabbay. Neural-Symbolic Cognitive Reasoning. Cognitive Technologies, Springer (2009). ISBN 978-3-540-73245-7. International Journal of Hybrid Intelligent Systems http://www.iospress.nl/html/14485869.php Archived 2005-12-11 at the Wayback Machine International Conference on Hybrid Intelligent Systems http://his.hybridsystem.com/ HIS'01: http://www.softcomputing.net/his01/ HIS'02: https://web.archive.org/web/20060209160923/http://tamarugo.cec.uchile.cl/~his02/ HIS'03: http://www.softcomputing.net/his03/ HIS'04: https://web.archive.org/web/20060303051902/http://www.cs.nmt.edu/~his04/ HIS'05: https://web.archive.org/web/20051223013031/http://www.ica.ele.puc-rio.br/his05/ HIS'06 https://web.archive.org/web/20110510025133/http://his-ncei06.kedri.info/ HIS'7 September 17–19, 2007, Kaiserslautern, Germany, http://www.eit.uni-kl.de/koenig/HIS07_Web/his07main.html hybrid systems resources: http://www.cogsci.rpi.edu/~rsun/hybrid-resource.html Archived 2009-09-25 at the Wayback Machine
|
GPT Store : The GPT Store is a platform developed by OpenAI that enables users and developers to create, publish, and monetize GPTs without requiring advanced programming skills. GPTs are custom applications built using the artificial intelligence chatbot known as ChatGPT.
|
GPT Store : The GPT Store was announced in October 2023 and launched in January 2024. According to OpenAI, the platform aims to democratize access to advanced artificial intelligence and facilitate the creation of custom chatbot applications without requiring advanced programming skills. The platform has garnered attention from developers and companies for its innovative potential and monetization opportunities. Initially available only to paying customers, access to the GPT Store became free in May 2024.
|
GPT Store : The GPT Store allows users to create and customize chatbots, known as GPTs, tailored to various needs such as customer service, personal assistance, video and image creation, and more. GPTs are categorized into various sections, including Programming, Education, and Research. The platform is designed to be user-friendly, with intuitive tools that do not require advanced technical knowledge. Creators of GPTs will have the opportunity to monetize their applications through various business models, including subscriptions and pay-per-use. The GPT Store also features a star-based rating system for users to evaluate GPTs, similar to other app stores such as Apple's App Store and Google Play.
|
GPT Store : Despite its initial success, the GPT Store has faced criticism concerning potential copyright violations. Some users and companies have expressed concerns about the use of AI-generated content that may infringe on intellectual property rights. For instance, a teacher has alleged that some students created GPTs that provided access to content from copyrighted books. == References ==
|
Bayesian programming : Bayesian programming is a formalism and a methodology for specifying probabilistic models and solving problems when less than the necessary information is available. Edwin T. Jaynes proposed that probability could be considered as an alternative and an extension of logic for rational reasoning with incomplete and uncertain information. In his founding book Probability Theory: The Logic of Science he developed this theory and proposed what he called "the robot," which was not a physical device, but an inference engine to automate probabilistic reasoning: a kind of Prolog for probability instead of logic. Bayesian programming is a formal and concrete implementation of this "robot". Bayesian programming may also be seen as an algebraic formalism to specify graphical models such as, for instance, Bayesian networks, dynamic Bayesian networks, Kalman filters or hidden Markov models. Indeed, Bayesian programming is more general than Bayesian networks and has a power of expression equivalent to probabilistic factor graphs.
|
Bayesian programming : A Bayesian program is a means of specifying a family of probability distributions. Its constituent elements can be summarized by the nested structure: Program = { Description = { Specification (π) = { Variables, Decomposition, Forms }, Identification (using δ) }, Question }. A program is constructed from a description and a question. A description is constructed using some specification (π) as given by the programmer and an identification or learning process for the parameters not completely specified by the specification, using a data set (δ). A specification is constructed from a set of pertinent variables, a decomposition and a set of forms. Forms are either parametric forms or questions to other Bayesian programs. A question specifies which probability distribution has to be computed.
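To make the structure above concrete, here is a deliberately small, hypothetical example (the variables, probability tables and the "spam" interpretation are invented for illustration, not taken from the Bayesian programming literature): the pertinent variables are S and W, the decomposition is P(S, W) = P(S) P(W | S), the forms are discrete probability tables, and the question asks for P(S | W).

```python
# Parametric forms given as tables (all values are made up for illustration).
p_s = {0: 0.7, 1: 0.3}                       # prior P(S): S = 1 means "spam"
p_w_given_s = {0: {0: 0.9, 1: 0.1},          # P(W | S = 0)
               1: {0: 0.4, 1: 0.6}}          # P(W | S = 1): W = 1 means "word present"

def question(w):
    """Answer the question P(S | W = w) using the decomposition P(S, W) = P(S) P(W | S)."""
    joint = {s: p_s[s] * p_w_given_s[s][w] for s in p_s}
    z = sum(joint.values())                  # normalization constant
    return {s: v / z for s, v in joint.items()}

print(question(1))   # posterior over S after observing the word
```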
|
Bayesian programming : The comparison between probabilistic approaches (not only Bayesian programming) and possibility theories continues to be debated. Possibility theories, such as fuzzy sets, fuzzy logic and possibility theory, are alternatives to probability for modelling uncertainty. Their proponents argue that probability is insufficient or inconvenient to model certain aspects of incomplete/uncertain knowledge. The defense of probability is mainly based on Cox's theorem, which starts from four postulates concerning rational reasoning in the presence of uncertainty and demonstrates that the only mathematical framework that satisfies these postulates is probability theory. The argument is that any approach other than probability necessarily infringes one of these postulates, and the debate then turns on the value of that infringement.
|
Bayesian programming : The purpose of probabilistic programming is to unify the scope of classical programming languages with probabilistic modeling (especially Bayesian networks) to deal with uncertainty while profiting from the programming languages' expressiveness to encode complexity. Extended classical programming languages include logical languages as proposed in Probabilistic Horn Abduction, Independent Choice Logic, PRISM, and ProbLog, which proposes an extension of Prolog. They can also be extensions of functional programming languages (essentially Lisp and Scheme) such as IBAL or CHURCH. The underlying programming languages can be object-oriented, as in BLOG and FACTORIE, or more standard ones, as in CES and FIGARO. The purpose of Bayesian programming is different. Jaynes' precept of "probability as logic" argues that probability is an extension of and an alternative to logic, above which a complete theory of rationality, computation and programming can be rebuilt. Bayesian programming attempts to replace classical languages with a programming approach based on probability that considers incompleteness and uncertainty. The precise comparison between the semantics and power of expression of Bayesian and probabilistic programming is an open question.
|
Bayesian programming : Kamel Mekhnacha (2013). Bayesian Programming. Chapman and Hall/CRC. doi:10.1201/b16111. ISBN 978-1-4398-8032-6.
|
Bayesian programming : A companion site to the Bayesian programming book where to download ProBT an inference engine dedicated to Bayesian programming. The Bayesian-programming.org site Archived 2013-11-23 at archive.today for the promotion of Bayesian programming with detailed information and numerous publications.
|
Electricity price forecasting : Electricity price forecasting (EPF) is a branch of energy forecasting which focuses on using mathematical, statistical and machine learning models to predict electricity prices in the future. Over the last 30 years electricity price forecasts have become a fundamental input to energy companies’ decision-making mechanisms at the corporate level. Since the early 1990s, the process of deregulation and the introduction of competitive electricity markets have been reshaping the landscape of the traditionally monopolistic and government-controlled power sectors. Throughout Europe, North America, Australia and Asia, electricity is now traded under market rules using spot and derivative contracts. However, electricity is a very special commodity: it is economically non-storable and power system stability requires a constant balance between production and consumption. At the same time, electricity demand depends on weather (temperature, wind speed, precipitation, etc.) and the intensity of business and everyday activities (on-peak vs. off-peak hours, weekdays vs. weekends, holidays, etc.). These unique characteristics lead to price dynamics not observed in any other market, exhibiting daily, weekly and often annual seasonality and abrupt, short-lived and generally unanticipated price spikes. Extreme price volatility, which can be up to two orders of magnitude higher than that of any other commodity or financial asset, has forced market participants to hedge not only volume but also price risk. Price forecasts from a few hours to a few months ahead have become of particular interest to power portfolio managers. A power market company able to forecast the volatile wholesale prices with a reasonable level of accuracy can adjust its bidding strategy and its own production or consumption schedule in order to reduce the risk or maximize the profits in day-ahead trading. A ballpark estimate of savings from a 1% reduction in the mean absolute percentage error (MAPE) of short-term price forecasts is $300,000 per year for a utility with 1GW peak load. With the additional price forecasts, the savings double.
|
Electricity price forecasting : The simplest model for day-ahead forecasting is to ask each generation source to bid on blocks of generation and choose the cheapest bids. If not enough bids are submitted, the price is increased. If too many bids are submitted, the price can reach zero or become negative. The offer price includes the generation cost as well as the transmission cost, along with any profit. Power can be sold or purchased from adjoining power pools. The concept of independent system operators (ISOs) fosters competition for generation among wholesale market participants by unbundling the operation of transmission and generation. ISOs use bid-based markets to determine economic dispatch. Wind and solar power are non-dispatchable. Such power is normally sold before any other bids, at a predetermined rate for each supplier. Any excess is sold to another grid operator, stored using pumped-storage hydroelectricity, or, in the worst case, curtailed. Curtailment could significantly impact solar power's economic and environmental benefits at greater PV penetration levels. Allocation is done by bidding. The recent introduction of smart grids and the integration of distributed renewable generation have increased the uncertainty of future supply, demand and prices. This uncertainty has driven much research into the topic of forecasting.
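The bid-stack idea described above can be illustrated with a deliberately simplified merit-order clearing sketch. The bid list, units and single clearing price are illustrative simplifications; real day-ahead auctions handle block bids, network constraints, demand-side bids and negative prices:

```python
def clear_day_ahead(bids, demand):
    """Very simplified merit-order clearing: accept the cheapest generation
    bids until demand is covered and return the marginal (clearing) price.
    bids: list of (price_per_MWh, quantity_MWh) offers."""
    accepted, remaining = [], demand
    for price, qty in sorted(bids):                 # cheapest offers first
        if remaining <= 0:
            break
        take = min(qty, remaining)
        accepted.append((price, take))
        remaining -= take
    if remaining > 0:
        raise ValueError("not enough bids to cover demand")
    clearing_price = accepted[-1][0]                # price of the marginal accepted bid
    return clearing_price, accepted

print(clear_day_ahead([(20, 100), (35, 80), (50, 120), (15, 60)], demand=200))
```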
|
Electricity price forecasting : Electricity cannot be stored as easily as gas; it is produced at the exact moment of demand. All of the factors of supply and demand will, therefore, have an immediate impact on the price of electricity on the spot market. In addition to production costs, electricity prices are set by supply and demand. However, some fundamental drivers are the most likely to be considered. Short-term prices are impacted the most by the weather. Demand due to heating in the winter and cooling in the summer is the main driver of seasonal price spikes. Additional natural-gas-fired capacity is driving down the price of electricity and increasing demand. A country's natural resource endowment, as well as its regulations in place, greatly influences tariffs from the supply side. The supply side of the electricity market is most influenced by fuel prices and CO2 allowance prices. EU carbon prices have doubled since 2017, making them a significant driving factor of price.
|
Electricity price forecasting : A variety of methods and ideas have been tried for electricity price forecasting (EPF), with varying degrees of success. They can be broadly classified into six groups.
|
Electricity price forecasting : It is customary to talk about short-, medium- and long-term forecasting, but there is no consensus in the literature as to what the thresholds should actually be: Short-term forecasting generally involves horizons from a few minutes up to a few days ahead, and is of prime importance in day-to-day market operations. Medium-term forecasting, from a few days to a few months ahead, is generally preferred for balance sheet calculations, risk management and derivatives pricing. In many cases, especially in electricity price forecasting, evaluation is based not on the actual point forecasts, but on the distributions of prices over certain future time periods. As this type of modeling has a long-standing tradition in finance, an inflow of "finance solutions" is observed. Long-term forecasting, with lead times measured in months, quarters or even years, concentrates on investment profitability analysis and planning, such as determining the future sites or fuel sources of power plants.
|
Electricity price forecasting : In his extensive review paper, Weron looks ahead and speculates on the directions EPF will or should take over the next decade or so.
|
Electricity price forecasting : Energy forecasting Global Energy Forecasting Competitions == References ==
|
Controlled natural language : Controlled natural languages (CNLs) are subsets of natural languages that are obtained by restricting the grammar and vocabulary in order to reduce or eliminate ambiguity and complexity. Traditionally, controlled languages fall into two major types: those that improve readability for human readers (e.g. non-native speakers), and those that enable reliable automatic semantic analysis of the language. Languages of the first type (often called "simplified" or "technical" languages), for example ASD Simplified Technical English, Caterpillar Technical English and IBM's Easy English, are used in industry to increase the quality of technical documentation, and possibly to simplify the semi-automatic translation of the documentation. These languages restrict the writer through general rules such as "Keep sentences short", "Avoid the use of pronouns", "Only use dictionary-approved words", and "Use only the active voice". Languages of the second type have a formal syntax and formal semantics, and can be mapped to an existing formal language, such as first-order logic. Thus, those languages can be used as knowledge representation languages, and writing in those languages is supported by fully automatic consistency and redundancy checks, query answering, etc.
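As an illustration of the mapping to first-order logic mentioned above, a controlled-English sentence such as "Every customer owns a card." admits a reading like the following. The predicate names are chosen for illustration; the exact translation depends on the particular controlled language:

```latex
% One possible first-order reading of "Every customer owns a card."
\forall x\,\bigl(\mathit{customer}(x) \rightarrow \exists y\,(\mathit{card}(y) \wedge \mathit{owns}(x, y))\bigr)
```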
|
Controlled natural language : Existing controlled natural languages include:
|
Controlled natural language : IETF has reserved simple as a BCP 47 variant subtag for simplified versions of languages.
|
Controlled natural language : Controlled Natural Languages Archived 2021-03-08 at the Wayback Machine
|
Hugging Face : Hugging Face, Inc. is an American company incorporated under the Delaware General Corporation Law and based in New York City that develops computation tools for building applications using machine learning. It is most notable for its transformers library built for natural language processing applications and its platform that allows users to share machine learning models and datasets and showcase their work.
|
Hugging Face : The company was founded in 2016 by French entrepreneurs Clément Delangue, Julien Chaumond, and Thomas Wolf in New York City, originally as a company that developed a chatbot app targeted at teenagers. The company was named after the U+1F917 🤗 HUGGING FACE emoji. After open-sourcing the model behind the chatbot, the company pivoted to focus on being a platform for machine learning. In March 2021, Hugging Face raised US$40 million in a Series B funding round. On April 28, 2021, the company launched the BigScience Research Workshop in collaboration with several other research groups to release an open large language model. In 2022, the workshop concluded with the announcement of BLOOM, a multilingual large language model with 176 billion parameters. In December 2021, the company acquired Gradio, an open source library built for developing machine learning applications in Python. On May 5, 2022, the company announced its Series C funding round led by Coatue and Sequoia, at a $2 billion valuation. On August 3, 2022, the company announced the Private Hub, an enterprise version of its public Hugging Face Hub that supports SaaS or on-premises deployment. In February 2023, the company announced a partnership with Amazon Web Services (AWS) that would make Hugging Face's products available to AWS customers as building blocks for their custom applications. The company also said the next generation of BLOOM would run on Trainium, a proprietary machine learning chip created by AWS. In August 2023, the company announced that it had raised $235 million in a Series D funding round at a $4.5 billion valuation. The funding was led by Salesforce, with notable participation from Google, Amazon, Nvidia, AMD, Intel, IBM, and Qualcomm. In June 2024, the company announced, along with Meta and Scaleway, the launch of a new AI accelerator program for European startups. This initiative aims to help startups integrate open foundation models into their products, accelerating the EU AI ecosystem. The program, based at STATION F in Paris, will run from September 2024 to February 2025. Selected startups will receive mentoring, access to AI models and tools, and Scaleway's computing power. On September 23, 2024, to further the International Decade of Indigenous Languages, Hugging Face teamed up with Meta and UNESCO to launch a new online language translator built on Meta's No Language Left Behind open-source AI model, enabling free text translation across 200 languages, including many low-resource languages.
|
Hugging Face : OpenAI Station F Kaggle
|
Hugging Face : Official website
|
Structural risk minimization : Structural risk minimization (SRM) is an inductive principle of use in machine learning. Commonly in machine learning, a generalized model must be selected from a finite data set, with the consequent problem of overfitting – the model becoming too strongly tailored to the particularities of the training set and generalizing poorly to new data. The SRM principle addresses this problem by balancing the model's complexity against its success at fitting the training data. This principle was first set out in a 1974 book by Vladimir Vapnik and Alexey Chervonenkis and uses the VC dimension. In practical terms, structural risk minimization is implemented by minimizing E_train + β H(W), where E_train is the training error, the function H(W) is called a regularization function, and β is a constant. H(W) is chosen such that it takes large values on parameters W that belong to high-capacity subsets of the parameter space. Minimizing H(W) in effect limits the capacity of the accessible subsets of the parameter space, thereby controlling the trade-off between minimizing the training error and minimizing the expected gap between the training error and the test error. The SRM problem can be formulated in terms of data. Given n data points consisting of data x and labels y, the objective J(θ) is often expressed in the following manner: J(θ) = (1/(2n)) ∑_{i=1}^{n} (h_θ(x_i) − y_i)² + (λ/2) ∑_{j=1}^{d} θ_j². The first term is the mean squared error (MSE) term between the value of the learned model, h_θ, and the given labels y. This term is the training error, E_train, that was discussed earlier. The second term places a prior over the weights, penalizing larger weights. The trade-off coefficient, λ, is a hyperparameter that places more or less importance on the regularization term. A larger λ encourages smaller weights at the expense of a more optimal MSE, and a smaller λ relaxes regularization, allowing the model to fit the data. Note that as λ → ∞ the weights become zero, and as λ → 0 the model typically suffers from overfitting.
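A minimal sketch of the regularized objective above, using a linear model h_θ(x) = x·θ and its closed-form minimizer; the synthetic data and the λ values are arbitrary choices for illustration:

```python
import numpy as np

def srm_objective(theta, X, y, lam):
    """J(theta) = (1/(2n)) * sum (x_i . theta - y_i)^2 + (lambda/2) * ||theta||^2."""
    n = len(y)
    residuals = X @ theta - y
    return residuals @ residuals / (2 * n) + lam / 2 * theta @ theta

def fit_ridge(X, y, lam):
    """Closed-form minimizer of the objective above (ridge-style solution)."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
for lam in (0.0, 0.1, 10.0):          # larger lambda shrinks the weights
    theta = fit_ridge(X, y, lam)
    print(lam, np.round(theta, 3), round(srm_objective(theta, X, y, lam), 4))
```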
|
Structural risk minimization : Vapnik–Chervonenkis theory Support vector machines Model selection Occam Learning Empirical risk minimization Ridge regression Regularization (mathematics)
|
Structural risk minimization : Structural risk minimization at the support vector machines website.
|
Toy problem : In scientific disciplines, a toy problem or a puzzle-like problem is a problem that is not of immediate scientific interest, yet is used as an expository device to illustrate a trait that may be shared by other, more complicated, instances of the problem, or as a way to explain a particular, more general, problem-solving technique. A toy problem is useful to test and demonstrate methodologies. Researchers can use toy problems to compare the performance of different algorithms. They are also useful in game design. For instance, while engineering a large system, the large problem is often broken down into many smaller toy problems which have been well understood in detail. Often these problems distill a few important aspects of complicated problems so that they can be studied in isolation. Toy problems are thus often very useful in providing intuition about specific phenomena in more complicated problems. As an example, in the field of artificial intelligence, classical puzzles, games and problems are often used as toy problems. These include sliding-block puzzles, the N-Queens problem, the missionaries and cannibals problem, tic-tac-toe, chess, the Tower of Hanoi and others.
|
Toy problem : Blocks world Firing squad synchronization problem Monkey and banana problem Secretary problem
|
Toy problem : "toy problem". The Jargon Lexicon.
|
INDIAai : INDIAai is a web portal launched by the Government of India in May 2020 for artificial intelligence-related developments in India. It is known as the National AI Portal of India and was jointly started by the Ministry of Electronics and Information Technology (MeitY), the National e-Governance Division (NeGD) and the National Association of Software and Service Companies (NASSCOM), with support from the Department of School Education and Literacy (DoSE&L) and the Ministry of Human Resource Development.
|
INDIAai : The portal was launched on 30 May 2020 by Ravi Shankar Prasad, the Union Minister for Electronics and IT, Law and Justice and Communications, on the first anniversary of the second tenure of the Prime Minister Narendra Modi-led government. A national program for the youth, 'Responsible AI for Youth', was also launched on the same day. As of 2022, the website had been visited by more than 4.5 lakh users, with 1.2 million page views. It has 1151 articles on artificial intelligence, 701 news stories, 98 reports, 95 case studies and 213 videos on its portal. It maintains a database on the AI ecosystem of India featuring 121 government initiatives and 281 startups. In May 2022, INDIAai released a book titled 'AI for Everyone' that covers the basics of AI. The Union Cabinet, chaired by Prime Minister Narendra Modi, has approved the comprehensive national-level IndiaAI Mission with a budget outlay of Rs. 10,371.92 crore. The Mission will be implemented by the 'IndiaAI' Independent Business Division (IBD) under the Digital India Corporation (DIC).
|
INDIAai : It aims to function as a one-stop portal for all AI-related development in India. The platform publishes resources such as articles, news, interviews, and investment funding news and events for AI startups, AI companies, and educational firms related to artificial intelligence in India. It also distributes documents, case studies, and research reports. Additionally, the platform provides education and employment opportunities related to AI. It offers AI courses, both free and paid.
|
INDIAai : Official website
|
Neural Turing machine : A neural Turing machine (NTM) is a recurrent neural network model of a Turing machine. The approach was published by Alex Graves et al. in 2014. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent. An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone. The authors of the original NTM paper did not publish their source code. The first stable open-source implementation was published in 2018 at the 27th International Conference on Artificial Neural Networks, receiving a best-paper award. Other open source implementations of NTMs exist but as of 2018 they are not sufficiently stable for production use. The developers either report that the gradients of their implementation sometimes become NaN during training for unknown reasons and cause training to fail; report slow convergence; or do not report the speed of learning of their implementation. Differentiable neural computers are an outgrowth of Neural Turing machines, with attention mechanisms that control where the memory is active, and improve performance. == References ==
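The attentional memory access described above can be sketched in a few lines. The snippet below shows only the content-based addressing step (cosine similarity sharpened by a key strength β and normalized into read weights); a full NTM additionally uses location-based addressing, write heads and a trained controller, all omitted here, and the memory contents are invented for demonstration:

```python
import numpy as np

def content_addressing_read(memory, key, beta):
    """Differentiable content-based read over an NTM-style external memory:
    cosine similarity between a controller-emitted key and each memory row,
    sharpened by beta and normalized with a softmax to give read weights."""
    k = key / (np.linalg.norm(key) + 1e-8)
    M = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    scores = beta * (M @ k)                     # sharpened cosine similarities
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # soft attention over memory rows
    return weights @ memory, weights            # read vector, read weights

memory = np.arange(12.0).reshape(4, 3)          # 4 memory rows of width 3 (toy values)
read_vec, w = content_addressing_read(memory, key=np.array([9.0, 10.0, 11.0]), beta=5.0)
print(np.round(w, 3), np.round(read_vec, 2))
```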
|
Connectionist temporal classification : Connectionist temporal classification (CTC) is a type of neural network output and associated scoring function for training recurrent neural networks (RNNs) such as LSTM networks to tackle sequence problems where the timing is variable. It can be used for tasks like on-line handwriting recognition or recognizing phonemes in speech audio. CTC refers to the outputs and scoring, and is independent of the underlying neural network structure. It was introduced in 2006. The input is a sequence of observations, and the outputs are a sequence of labels, which can include blank outputs. The difficulty of training comes from there being many more observations than labels. For example, in speech audio there can be multiple time slices which correspond to a single phoneme. Since we do not know the alignment of the observed sequence with the target labels, we predict a probability distribution at each time step. A CTC network has a continuous output (e.g. softmax), which is fitted through training to model the probability of a label. CTC does not attempt to learn boundaries and timings: label sequences are considered equivalent if they differ only in alignment, ignoring blanks. Equivalent label sequences can occur in many ways, which makes scoring a non-trivial task, but there is an efficient forward–backward algorithm for that. CTC scores can then be used with the back-propagation algorithm to update the neural network weights. Alternative approaches to a CTC-fitted neural network include a hidden Markov model (HMM). In 2009, a connectionist temporal classification (CTC)-trained LSTM network was the first RNN to win pattern recognition contests when it won several competitions in connected handwriting recognition. In 2014, the Chinese company Baidu used CTC-trained RNNs to break the Switchboard Hub5'00 speech recognition dataset benchmark without using any traditional speech processing methods. In 2015, it was used in Google voice search and dictation on Android devices.
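PyTorch ships a CTC loss that implements the forward–backward scoring described above; the sketch below uses random tensors in place of real network outputs and labels, with all shapes chosen purely for illustration:

```python
import torch
import torch.nn as nn

T, N, C, S = 50, 4, 20, 10   # time steps, batch size, classes (incl. blank), max target length
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)  # stand-in for RNN outputs
targets = torch.randint(1, C, (N, S), dtype=torch.long)                  # label indices; 0 is the blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(5, S + 1, (N,), dtype=torch.long)

ctc = nn.CTCLoss(blank=0)    # sums over all equivalent alignments via forward-backward
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()              # gradients can then update the network weights
print(float(loss))
```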
|
Connectionist temporal classification : Section 16.4, "CTC" in Jurafsky and Martin's Speech and Language Processing, 3rd edition Hannun, Awni (27 November 2017). "Sequence Modeling with CTC". Distill. 2 (11): e8. doi:10.23915/distill.00008. ISSN 2476-0757.
|
Jpred : Jpred v.4 is the latest version of the JPred Protein Secondary Structure Prediction Server which provides predictions by the JNet algorithm, one of the most accurate methods for secondary structure prediction, that has existed since 1998 in different versions. In addition to protein secondary structure, JPred also makes predictions of solvent accessibility and coiled-coil regions. The JPred service runs up to 134 000 jobs per month and has carried out over 2 million predictions in total for users in 179 countries.
|
Jpred : The static HTML pages of JPred 2 are still available for reference.
|
Jpred : The JPred v3 followed on from previous versions of JPred developed and maintained by James Cuff and Jonathan Barber (see JPred References). This release added new functionality and fixed many bugs. The highlights are: a new, friendlier user interface; a retrained and optimised version of Jnet (v2), with a mean secondary structure prediction accuracy of >81%; batch submission of jobs; better error checking of input sequences/alignments; predictions now (optionally) returned via e-mail; users may provide their own query names for each submission; JPred now makes a prediction even when there are no PSI-BLAST hits to the query; and PS/PDF output now incorporates all the predictions.
|
Jpred : The current version of JPred (v4) has the following improvements and updates incorporated: Jnet retrained (v2.3.1) on the latest UniRef90 and SCOPe/ASTRAL releases, giving a mean secondary structure prediction accuracy of >82%; the web server upgraded to the latest technologies (Bootstrap framework, JavaScript) and the web pages updated, improving design and usability through responsive technologies; a RESTful API and mass-submission and results-retrieval scripts added, resulting in peak throughput above 20,000 predictions per day; prediction job monitoring tools added; results reporting upgraded, both on the web site and through the optional email summary reports (improved batch submission, results summary preview through Jalview, results visualization summary in SVG, and full multiple sequence alignments included in the reports); and help pages improved, incorporating tool-tips and one-page step-by-step tutorials. Sequence residues are categorised or assigned to one of the secondary structure elements, such as alpha-helix, beta-sheet and coiled-coil. Jnet uses two neural networks for its prediction. The first network is fed with a window of 17 residues over each amino acid in the alignment plus a conservation number. It uses a hidden layer of nine nodes and has three output nodes, one for each secondary structure element. The second network is fed with a window of 19 residues (the result of the first network) plus the conservation number. It has a hidden layer with nine nodes and has three output nodes.
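The two-stage window architecture described above can be sketched as follows. This is a structural illustration only: the input encoding (one value per window position plus a conservation score) is a simplification of the real profile-based input, and the weights are untrained:

```python
import torch
import torch.nn as nn

n_classes = 3                                   # helix, sheet, coil

# Stage 1: window of 17 residue positions plus a conservation number -> 9 hidden -> 3 outputs.
first_net = nn.Sequential(nn.Linear(17 + 1, 9), nn.Sigmoid(), nn.Linear(9, n_classes))

# Stage 2: window of 19 stage-1 outputs plus the conservation number -> 9 hidden -> 3 outputs.
second_net = nn.Sequential(nn.Linear(19 * n_classes + 1, 9), nn.Sigmoid(), nn.Linear(9, n_classes))

stage1_out = first_net(torch.randn(1, 18))                    # one sliding-window position
stage2_out = second_net(torch.randn(1, 19 * n_classes + 1))   # smoothing over stage-1 predictions
print(stage1_out.shape, stage2_out.shape)
```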
|
Jpred : PSIPRED List of protein structure prediction software == References ==
|
Word embedding : In natural language processing, a word embedding is a representation of a word. The embedding is used in text analysis. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that the words that are closer in the vector space are expected to be similar in meaning. Word embeddings can be obtained using language modeling and feature learning techniques, where words or phrases from the vocabulary are mapped to vectors of real numbers. Methods to generate this mapping include neural networks, dimensionality reduction on the word co-occurrence matrix, probabilistic models, explainable knowledge base method, and explicit representation in terms of the context in which words appear. Word and phrase embeddings, when used as the underlying input representation, have been shown to boost the performance in NLP tasks such as syntactic parsing and sentiment analysis.
|
Word embedding : In distributional semantics, a quantitative methodological approach for understanding meaning in observed language, word embeddings or semantic feature space models have been used as a knowledge representation for some time. Such models aim to quantify and categorize semantic similarities between linguistic items based on their distributional properties in large samples of language data. The underlying idea that "a word is characterized by the company it keeps" was proposed in a 1957 article by John Rupert Firth, but also has roots in the contemporaneous work on search systems and in cognitive psychology. The notion of a semantic space with lexical items (words or multi-word terms) represented as vectors or embeddings is based on the computational challenges of capturing distributional characteristics and using them for practical application to measure similarity between words, phrases, or entire documents. The first generation of semantic space models is the vector space model for information retrieval. Such vector space models for words and their distributional data, implemented in their simplest form, result in a very sparse vector space of high dimensionality (cf. curse of dimensionality). Reducing the number of dimensions using linear algebraic methods such as singular value decomposition then led to the introduction of latent semantic analysis in the late 1980s and the random indexing approach for collecting word co-occurrence contexts. In 2000, Bengio et al. showed in a series of papers titled "Neural probabilistic language models" how to reduce the high dimensionality of word representations in contexts by "learning a distributed representation for words". A study published in NeurIPS (NIPS) 2002 introduced the use of both word and document embeddings applying the method of kernel CCA to bilingual (and multilingual) corpora, also providing an early example of self-supervised learning of word embeddings. Word embeddings come in two different styles, one in which words are expressed as vectors of co-occurring words, and another in which words are expressed as vectors of the linguistic contexts in which the words occur; these different styles are studied in Lavelli et al., 2004. Roweis and Saul published in Science how to use "locally linear embedding" (LLE) to discover representations of high-dimensional data structures. Most new word embedding techniques after about 2005 rely on a neural network architecture instead of more probabilistic and algebraic models, following foundational work done by Yoshua Bengio and colleagues. The approach has been adopted by many research groups after theoretical advances in 2010 had been made on the quality of vectors and the training speed of the model, as well as after hardware advances allowed for a broader parameter space to be explored profitably. In 2013, a team at Google led by Tomas Mikolov created word2vec, a word embedding toolkit that can train vector space models faster than previous approaches. The word2vec approach has been widely used in experimentation and was instrumental in raising interest in word embeddings as a technology, moving the research strand out of specialised research into broader experimentation and eventually paving the way for practical application.
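For reference, training a small word2vec model takes only a few lines with the gensim library; the toy corpus and hyperparameter values below are purely illustrative, since real embeddings require large corpora:

```python
from gensim.models import Word2Vec

# A tiny tokenized corpus purely for illustration.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "animals"],
]

# sg=1 selects the skip-gram objective; CBOW is the default (sg=0).
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=50)
print(model.wv["cat"][:5])                    # first few dimensions of the embedding
print(model.wv.most_similar("cat", topn=3))   # nearest neighbours in the vector space
```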
|
Word embedding : Historically, one of the main limitations of static word embeddings or word vector space models is that words with multiple meanings are conflated into a single representation (a single vector in the semantic space). In other words, polysemy and homonymy are not handled properly. For example, in the sentence "The club I tried yesterday was great!", it is not clear if the term club is related to the word sense of a club sandwich, clubhouse, golf club, or any other sense that club might have. The necessity to accommodate multiple meanings per word in different vectors (multi-sense embeddings) is the motivation for several contributions in NLP to split single-sense embeddings into multi-sense ones. Most approaches that produce multi-sense embeddings can be divided into two main categories according to their word-sense representation: unsupervised and knowledge-based. Based on word2vec skip-gram, Multi-Sense Skip-Gram (MSSG) performs word-sense discrimination and embedding simultaneously, improving its training time, while assuming a specific number of senses for each word. In the Non-Parametric Multi-Sense Skip-Gram (NP-MSSG) this number can vary with each word. Combining the prior knowledge of lexical databases (e.g., WordNet, ConceptNet, BabelNet), word embeddings and word sense disambiguation, Most Suitable Sense Annotation (MSSA) labels word-senses through an unsupervised and knowledge-based approach, considering a word's context in a pre-defined sliding window. Once the words are disambiguated, they can be used in a standard word embedding technique, producing multi-sense embeddings. The MSSA architecture allows the disambiguation and annotation process to be performed recurrently in a self-improving manner. The use of multi-sense embeddings is known to improve performance in several NLP tasks, such as part-of-speech tagging, semantic relation identification, semantic relatedness, named entity recognition and sentiment analysis. As of the late 2010s, contextually-meaningful embeddings such as ELMo and BERT have been developed. Unlike static word embeddings, these embeddings are produced at the token level, in that each occurrence of a word has its own embedding. These embeddings better reflect the multi-sense nature of words, because occurrences of a word in similar contexts are situated in similar regions of BERT’s embedding space.
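To make the contrast with static embeddings concrete, the following sketch (assuming the Hugging Face transformers library and the bert-base-uncased checkpoint; the helper club_vector is purely illustrative) extracts the token-level vector for "club" in two different contexts and compares them.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def club_vector(sentence: str) -> torch.Tensor:
    """Return BERT's final hidden state for the token 'club' in the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]        # (seq_len, 768)
    club_id = tokenizer.convert_tokens_to_ids("club")        # assumes 'club' is a single wordpiece
    position = (inputs["input_ids"][0] == club_id).nonzero()[0, 0]
    return hidden[position]

v_golf = club_vector("She bought a new golf club for the tournament.")
v_night = club_vector("The night club was crowded until the early morning.")
cos = torch.nn.functional.cosine_similarity(v_golf, v_night, dim=0)
print(f"cosine similarity between the two 'club' vectors: {cos.item():.3f}")
```

With a static embedding, both occurrences of "club" would map to the identical vector; here each occurrence receives its own context-dependent representation.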
|
Word embedding : Word embeddings for n-grams in biological sequences (e.g. DNA, RNA, and proteins) for bioinformatics applications have been proposed by Asgari and Mofrad. Named bio-vectors (BioVec) for biological sequences in general, protein-vectors (ProtVec) for proteins (amino-acid sequences), and gene-vectors (GeneVec) for gene sequences, these representations can be widely used in applications of deep learning in proteomics and genomics. The results presented by Asgari and Mofrad suggest that BioVectors can characterize biological sequences in terms of biochemical and biophysical interpretations of the underlying patterns.
|
Word embedding : Word embeddings with applications in game design have been proposed by Rabii and Cook as a way to discover emergent gameplay using logs of gameplay data. The process requires transcribing the actions that occur during a game into a formal language and then using the resulting text to create word embeddings. The results presented by Rabii and Cook suggest that the resulting vectors can capture expert knowledge about games like chess that is not explicitly stated in the game's rules.
|
Word embedding : The idea has been extended to embeddings of entire sentences or even documents, e.g. in the form of the thought vectors concept. In 2015, some researchers suggested "skip-thought vectors" as a means to improve the quality of machine translation. A more recent and popular approach for representing sentences is Sentence-BERT, or SentenceTransformers, which modifies pre-trained BERT with the use of siamese and triplet network structures.
|
Word embedding : Software for training and using word embeddings includes Tomáš Mikolov's Word2vec, Stanford University's GloVe, GN-GloVe, Flair embeddings, AllenNLP's ELMo, BERT, fastText, Gensim, Indra, and Deeplearning4j. Principal Component Analysis (PCA) and T-Distributed Stochastic Neighbour Embedding (t-SNE) are both used to reduce the dimensionality of word vector spaces and visualize word embeddings and clusters.
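As a concrete illustration of such dimensionality reduction, the sketch below (using scikit-learn, with random vectors standing in for trained embeddings, so the resulting layout is meaningless) projects word vectors to two dimensions with PCA; sklearn.manifold.TSNE could be substituted for a t-SNE projection.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
words = ["king", "queen", "man", "woman", "apple", "banana"]
vectors = rng.normal(size=(len(words), 50))   # placeholders for real 50-d embeddings

coords = PCA(n_components=2).fit_transform(vectors)
for word, (x, y) in zip(words, coords):
    print(f"{word:>7s}: ({x:+.2f}, {y:+.2f})")   # 2-d coordinates ready for plotting
```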
|
Word embedding : Word embeddings may contain the biases and stereotypes present in the training dataset. Bolukbasi et al. point out in the 2016 paper “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” that a publicly available (and popular) word2vec embedding trained on Google News texts (a commonly used data corpus), which consists of text written by professional journalists, still shows disproportionate word associations reflecting gender and racial biases when extracting word analogies. For example, one of the analogies generated using this embedding is “man is to computer programmer as woman is to homemaker”. Research by Jieyu Zhou et al. shows that applying these trained word embeddings without careful oversight likely perpetuates existing societal bias, which is introduced through unaltered training data. Furthermore, word embeddings can even amplify these biases.
|
Word embedding : Embedding (machine learning) Brown clustering Distributional–relational database
|
Embodied agent : In artificial intelligence, an embodied agent, also sometimes referred to as an interface agent, is an intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment. A branch of artificial intelligence focuses on empowering such agents to interact autonomously with human beings and the environment. Mobile robots are one example of physically embodied agents; Ananova and Microsoft Agent are examples of graphically embodied agents. Embodied conversational agents are embodied agents (usually with a graphical front-end as opposed to a robotic body) that are capable of engaging in conversation with one another and with humans employing the same verbal and nonverbal means that humans do (such as gesture, facial expression, and so forth).
|
Embodied agent : Embodied conversational agents are a form of intelligent user interface. Graphically embodied agents aim to unite gesture, facial expression and speech to enable face-to-face communication with users, providing a powerful means of human-computer interaction.
|
Embodied agent : Face-to-face communication provides a much richer communication channel than other means of communicating. It enables pragmatic communication acts such as conversational turn-taking, facial expression of emotions, information structure and emphasis, visualisation and iconic gestures, and orientation in a three-dimensional environment. This communication takes place through both verbal and non-verbal channels such as gaze, gesture, spoken intonation and body posture. Research has found that users prefer a non-verbal visual indication of an embodied system's internal state to a verbal indication, demonstrating the value of additional non-verbal communication channels. In addition, the face-to-face communication involved in interacting with an embodied agent can be conducted alongside another task without distracting the human participants, instead improving the enjoyment of such an interaction. Furthermore, the use of an embodied presentation agent results in improved recall of the presented information. Embodied agents also provide a social dimension to the interaction. Humans willingly ascribe social awareness to computers, and thus interaction with embodied agents follows social conventions, similar to human-to-human interaction. This social interaction both raises the believability and perceived trustworthiness of agents, and increases the user's engagement with the system. Rickenberg and Reeves found that the presence of an embodied agent on a website increased the level of user trust in that website, but also increased users' anxiety and affected their performance, as if they were being watched by a real human. Another effect of the social aspect of agents is that presentations given by an embodied agent are perceived as being more entertaining and less difficult than similar presentations given without an agent. Research shows that perceived enjoyment, followed by perceived usefulness and ease of use, is the major factor influencing user adoption of embodied agents. A study in January 2004 by Byron Reeves at Stanford demonstrated that digital characters could "enhance online experiences": virtual characters add a sense of relatability to the user experience and make it more approachable. This increase in likability in turn helps improve the product, benefiting both end users and those creating it.
|
Embodied agent : Bates, Joseph (1994), "The Role of Emotion in Believable Agents", Communications of the ACM, 37 (7): 122–125, CiteSeerX 10.1.1.47.8186, doi:10.1145/176789.176803, S2CID 207178664. Cassell, Justin (2000), "More than Just Another Pretty Face: Embodied Conversational Interface Agents" (PDF), Communications of the ACM, 43 (4): 70–78, doi:10.1145/332051.332075, S2CID 10691309. Ruebsamen, Gene (2002), Intelligent Agent, M.S. Thesis. California State University, Long Beach: U.S.A.
|
Embodied agent : "AI Makes Strides in Virtual Worlds More Like Our Own". Quanta Magazine. June 24, 2022.
|
Inception (deep learning architecture) : Inception is a family of convolutional neural networks (CNNs) for computer vision, introduced by researchers at Google in 2014 as GoogLeNet (later renamed Inception v1). The series was historically important as an early CNN that separates the stem (data ingest), body (data processing), and head (prediction), an architectural design that persists in all modern CNNs.
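The characteristic building block of the body is the Inception module, which runs several convolutions of different sizes in parallel and concatenates their outputs. The following is a minimal sketch in PyTorch (not Google's original code) of an Inception-v1-style block, using the channel sizes of GoogLeNet's first "3a" module as an example.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1x1, 3x3, 5x5 and pooled branches, concatenated on channels."""
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_reduce, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_reduce, c3, kernel_size=3, padding=1))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_reduce, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_reduce, c5, kernel_size=5, padding=2))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1))

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

x = torch.randn(1, 192, 28, 28)
block = InceptionBlock(192, 64, 96, 128, 16, 32, 32)
print(block(x).shape)   # torch.Size([1, 256, 28, 28]) -- 64 + 128 + 32 + 32 channels
```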
|
Inception (deep learning architecture) : A list of all Inception models released by Google: "models/research/slim/README.md at master · tensorflow/models". GitHub. Retrieved 2024-10-19.
|
Kernel density estimation : In statistics, kernel density estimation (KDE) is the application of kernel smoothing for probability density estimation, i.e., a non-parametric method to estimate the probability density function of a random variable based on kernels as weights. KDE answers a fundamental data smoothing problem where inferences about the population are made based on a finite data sample. In some fields such as signal processing and econometrics it is also termed the Parzen–Rosenblatt window method, after Emanuel Parzen and Murray Rosenblatt, who are usually credited with independently creating it in its current form. One of the famous applications of kernel density estimation is in estimating the class-conditional marginal densities of data when using a naive Bayes classifier, which can improve its prediction accuracy.
|
Kernel density estimation : Let (x1, x2, ..., xn) be independent and identically distributed samples drawn from some univariate distribution with an unknown density f at any given point x. We are interested in estimating the shape of this function f. Its kernel density estimator is \(\hat{f}_h(x) = \frac{1}{n}\sum_{i=1}^{n} K_h(x - x_i) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),\) where K is the kernel — a non-negative function — and h > 0 is a smoothing parameter called the bandwidth or simply width. A kernel with subscript h is called the scaled kernel and defined as \(K_h(x) = \tfrac{1}{h} K\!\left(\tfrac{x}{h}\right)\). Intuitively one wants to choose h as small as the data will allow; however, there is always a trade-off between the bias of the estimator and its variance. The choice of bandwidth is discussed in more detail below. A range of kernel functions are commonly used: uniform, triangular, biweight, triweight, Epanechnikov (parabolic), normal, and others. The Epanechnikov kernel is optimal in a mean square error sense, though the loss of efficiency is small for the kernels listed previously. Due to its convenient mathematical properties, the normal kernel is often used, which means K(x) = ϕ(x), where ϕ is the standard normal density function. The kernel density estimator then becomes \(\hat{f}_h(x) = \frac{1}{nh\sigma\sqrt{2\pi}} \sum_{i=1}^{n} \exp\!\left(-\frac{(x - x_i)^2}{2h^2\sigma^2}\right),\) where σ is the standard deviation of the sample. The construction of a kernel density estimate finds interpretations in fields outside of density estimation. For example, in thermodynamics, this is equivalent to the amount of heat generated when heat kernels (the fundamental solution to the heat equation) are placed at each data point location xi. Similar methods are used to construct discrete Laplace operators on point clouds for manifold learning (e.g. diffusion map).
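A minimal sketch of the estimator above with a Gaussian kernel, written in plain NumPy (the function name kde_gauss is illustrative, not a library routine):

```python
import numpy as np

def kde_gauss(x_grid, samples, h):
    """Evaluate f_hat_h(x) = (1/(n h)) * sum_i K((x - x_i)/h) with a Gaussian K."""
    u = (x_grid[:, None] - samples[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)   # standard normal kernel K(u)
    return k.mean(axis=1) / h

rng = np.random.default_rng(0)
samples = rng.standard_normal(200)
x_grid = np.linspace(-4.0, 4.0, 9)
print(np.round(kde_gauss(x_grid, samples, h=0.4), 3))
```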
|
Kernel density estimation : Kernel density estimates are closely related to histograms, but can be endowed with properties such as smoothness or continuity by using a suitable kernel. The relationship can be illustrated with a sample of six data points. For the histogram, the horizontal axis is first divided into sub-intervals or bins that cover the range of the data: in this case, six bins each of width 2. Whenever a data point falls inside a bin, a box of height 1/12 is placed there; if more than one data point falls inside the same bin, the boxes are stacked on top of each other. For the kernel density estimate, a normal kernel with a standard deviation of 1.5 is placed on each of the data points xi, and the kernels are summed to form the kernel density estimate. The smoothness of the kernel density estimate (compared to the discreteness of the histogram) illustrates how kernel density estimates converge faster to the true underlying density for continuous random variables.
|
Kernel density estimation : The bandwidth of the kernel is a free parameter which exhibits a strong influence on the resulting estimate. To illustrate its effect, consider a simulated random sample from the standard normal distribution, whose true density is normal with mean 0 and variance 1. An estimate with bandwidth h = 0.05 is undersmoothed, since it contains too many spurious data artifacts arising from a bandwidth that is too small. An estimate with bandwidth h = 2 is oversmoothed, since this bandwidth obscures much of the underlying structure. An estimate with a bandwidth of h = 0.337 is considered optimally smoothed, since its density estimate is close to the true density. An extreme situation is encountered in the limit h → 0 (no smoothing), where the estimate is a sum of n delta functions centered at the coordinates of the analyzed samples. In the other extreme limit h → ∞ the estimate retains the shape of the used kernel, centered on the mean of the samples (completely smooth). The most common optimality criterion used to select this parameter is the expected L2 risk function, also termed the mean integrated squared error: \(\operatorname{MISE}(h) = \operatorname{E}\!\left[\int \left(\hat{f}_h(x) - f(x)\right)^2 dx\right].\) Under weak assumptions on f and K (f is the, generally unknown, real density function), \(\operatorname{MISE}(h) = \operatorname{AMISE}(h) + o\!\left((nh)^{-1} + h^4\right),\) where o is the little o notation and n the sample size (as above). The AMISE is the asymptotic MISE, i.e. the two leading terms, \(\operatorname{AMISE}(h) = \frac{R(K)}{nh} + \frac{1}{4} m_2(K)^2 h^4 R(f''),\) where \(R(g) = \int g(x)^2 \, dx\) for a function g, \(m_2(K) = \int x^2 K(x) \, dx\), f'' is the second derivative of f, and K is the kernel. The minimum of this AMISE is the solution to the differential equation \(\frac{\partial}{\partial h}\operatorname{AMISE}(h) = -\frac{R(K)}{nh^2} + m_2(K)^2 h^3 R(f'') = 0,\) that is, \(h_{\operatorname{AMISE}} = \frac{R(K)^{1/5}}{m_2(K)^{2/5} R(f'')^{1/5}}\, n^{-1/5} = C n^{-1/5}.\) Neither the AMISE nor the hAMISE formulas can be used directly, since they involve the unknown density function f or its second derivative f''. To overcome that difficulty, a variety of automatic, data-based methods have been developed to select the bandwidth. Several review studies have been undertaken to compare their efficacies, with the general consensus that the plug-in selectors and cross-validation selectors are the most useful over a wide range of data sets. Substituting any bandwidth h which has the same asymptotic order n−1/5 as hAMISE into the AMISE gives AMISE(h) = O(n−4/5), where O is the big O notation. It can be shown that, under weak assumptions, there cannot exist a non-parametric estimator that converges at a faster rate than the kernel estimator. Note that the n−4/5 rate is slower than the typical n−1 convergence rate of parametric methods. If the bandwidth is not held fixed, but is varied depending upon the location of either the estimate (balloon estimator) or the samples (pointwise estimator), this produces a particularly powerful method termed adaptive or variable bandwidth kernel density estimation. Bandwidth selection for kernel density estimation of heavy-tailed distributions is relatively difficult.
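One simple data-based selector, suited to roughly normal data, is Silverman's rule of thumb, \(h = 0.9\,\min(\hat{\sigma}, \mathrm{IQR}/1.34)\, n^{-1/5}\); the plug-in and cross-validation selectors mentioned above are preferred in general. A minimal sketch:

```python
import numpy as np

def silverman_bandwidth(samples: np.ndarray) -> float:
    """Rule-of-thumb bandwidth h = 0.9 * min(std, IQR/1.34) * n**(-1/5)."""
    n = samples.size
    sigma = samples.std(ddof=1)
    iqr = np.subtract(*np.percentile(samples, [75, 25]))
    return 0.9 * min(sigma, iqr / 1.34) * n ** (-1 / 5)

rng = np.random.default_rng(0)
samples = rng.standard_normal(200)
print(f"h = {silverman_bandwidth(samples):.3f}")
```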
|
Kernel density estimation : Given the sample (x1, x2, ..., xn), it is natural to estimate the characteristic function φ(t) = E[eitX] as \(\hat{\varphi}(t) = \frac{1}{n}\sum_{j=1}^{n} e^{itx_j}.\) Knowing the characteristic function, it is possible to find the corresponding probability density function through the Fourier transform formula. One difficulty with applying this inversion formula is that it leads to a diverging integral, since the estimate \(\hat{\varphi}(t)\) is unreliable for large t's. To circumvent this problem, the estimator \(\hat{\varphi}(t)\) is multiplied by a damping function ψh(t) = ψ(ht), which is equal to 1 at the origin and then falls to 0 at infinity. The "bandwidth parameter" h controls how fast we try to dampen the function \(\hat{\varphi}(t)\). In particular when h is small, then ψh(t) will be approximately one for a large range of t's, which means that \(\hat{\varphi}(t)\) remains practically unaltered in the most important region of t's. The most common choice for the function ψ is either the uniform function ψ(t) = 1, which effectively means truncating the interval of integration in the inversion formula to [−1/h, 1/h], or the Gaussian function ψ(t) = e−πt2. Once the function ψ has been chosen, the inversion formula may be applied, and the density estimator will be \(\hat{f}(x) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \hat{\varphi}(t)\,\psi_h(t)\,e^{-itx}\,dt = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \frac{1}{n}\sum_{j=1}^{n} e^{it(x_j - x)}\,\psi(ht)\,dt = \frac{1}{nh}\sum_{j=1}^{n} \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-i(ht)\frac{x - x_j}{h}}\,\psi(ht)\,d(ht) = \frac{1}{nh}\sum_{j=1}^{n} K\!\left(\frac{x - x_j}{h}\right),\) where K is the Fourier transform of the damping function ψ. Thus the kernel density estimator coincides with the characteristic function density estimator.
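A numerical sketch of this route (illustrative only, not a standard library routine): estimate the empirical characteristic function, damp it with the Gaussian ψ(s) = e−πs2, and invert the transform on a grid of t values.

```python
import numpy as np

def char_fn_kde(x_grid, samples, h, t_max=20.0, n_t=4001):
    """Density estimate via the damped empirical characteristic function."""
    t = np.linspace(-t_max, t_max, n_t)
    dt = t[1] - t[0]
    phi_hat = np.mean(np.exp(1j * np.outer(t, samples)), axis=1)   # empirical char. fn.
    damping = np.exp(-np.pi * (h * t) ** 2)                        # psi(h t), Gaussian choice
    integrand = (phi_hat * damping)[:, None] * np.exp(-1j * np.outer(t, x_grid))
    return (integrand.sum(axis=0) * dt).real / (2.0 * np.pi)       # inverse Fourier transform

rng = np.random.default_rng(0)
samples = rng.standard_normal(300)
x_grid = np.linspace(-3.0, 3.0, 7)
print(np.round(char_fn_kde(x_grid, samples, h=0.4), 3))
```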
|
Kernel density estimation : We can extend the definition of the (global) mode to a local sense and define the local modes: \(M = \{x : g(x) = 0,\ \lambda_1(x) < 0\},\) where g(x) = ∇f(x) is the gradient of the density and λ1(x) is the largest eigenvalue of its Hessian. Namely, M is the collection of points for which the density function is locally maximized. A natural estimator \(\hat{M}\) of M is a plug-in from KDE, where \(\hat{g}(x)\) and \(\hat{\lambda}_1(x)\) are the KDE versions of g(x) and λ1(x). Under mild assumptions, \(\hat{M}\) is a consistent estimator of M. Note that one can use the mean shift algorithm to compute the estimator \(\hat{M}\) numerically.
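A minimal sketch of the mean shift iteration for a one-dimensional Gaussian KDE (illustrative only): starting from a point, repeatedly move to the kernel-weighted mean of the samples, which climbs the estimated density until a local mode is reached.

```python
import numpy as np

def mean_shift_mode(x0, samples, h, steps=200, tol=1e-8):
    """Climb the Gaussian-KDE density from x0 to a nearby local mode."""
    x = float(x0)
    for _ in range(steps):
        w = np.exp(-0.5 * ((x - samples) / h) ** 2)      # kernel weights
        x_new = float(np.sum(w * samples) / np.sum(w))   # weighted mean of samples
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(2, 0.5, 150)])
print(mean_shift_mode(-1.0, samples, h=0.3))   # converges near the mode at -2
print(mean_shift_mode(+1.0, samples, h=0.3))   # converges near the mode at +2
```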
|
Kernel density estimation : A non-exhaustive list of software implementations of kernel density estimators includes: In Analytica release 4.4, the Smoothing option for PDF results uses KDE, and from expressions it is available via the built-in Pdf function. In C/C++, FIGTree is a library that can be used to compute kernel density estimates using normal kernels. MATLAB interface available. In C++, libagf is a library for variable kernel density estimation. In C++, mlpack is a library that can compute KDE using many different kernels. It allows to set an error tolerance for faster computation. Python and R interfaces are available. in C# and F#, Math.NET Numerics is an open source library for numerical computation which includes kernel density estimation In CrimeStat, kernel density estimation is implemented using five different kernel functions – normal, uniform, quartic, negative exponential, and triangular. Both single- and dual-kernel density estimate routines are available. Kernel density estimation is also used in interpolating a Head Bang routine, in estimating a two-dimensional Journey-to-crime density function, and in estimating a three-dimensional Bayesian Journey-to-crime estimate. In ELKI, kernel density functions can be found in the package de.lmu.ifi.dbs.elki.math.statistics.kernelfunctions In ESRI products, kernel density mapping is managed out of the Spatial Analyst toolbox and uses the Quartic(biweight) kernel. In Excel, the Royal Society of Chemistry has created an add-in to run kernel density estimation based on their Analytical Methods Committee Technical Brief 4. In gnuplot, kernel density estimation is implemented by the smooth kdensity option, the datafile can contain a weight and bandwidth for each point, or the bandwidth can be set automatically according to "Silverman's rule of thumb" (see above). In Haskell, kernel density is implemented in the statistics package. In IGOR Pro, kernel density estimation is implemented by the StatsKDE operation (added in Igor Pro 7.00). Bandwidth can be user specified or estimated by means of Silverman, Scott or Bowmann and Azzalini. Kernel types are: Epanechnikov, Bi-weight, Tri-weight, Triangular, Gaussian and Rectangular. In Java, the Weka machine learning package provides weka.estimators.KernelEstimator, among others. In JavaScript, the visualization package D3.js offers a KDE package in its science.stats package. In JMP, the Graph Builder platform utilizes kernel density estimation to provide contour plots and high density regions (HDRs) for bivariate densities, and violin plots and HDRs for univariate densities. Sliders allow the user to vary the bandwidth. Bivariate and univariate kernel density estimates are also provided by the Fit Y by X and Distribution platforms, respectively. In Julia, kernel density estimation is implemented in the KernelDensity.jl package. In KNIME, 1D and 2D Kernel Density distributions can be generated and plotted using nodes from the Vernalis community contribution, e.g. 1D Kernel Density Plot, among others. The underlying implementation is written in Java. In MATLAB, kernel density estimation is implemented through the ksdensity function (Statistics Toolbox). As of the 2018a release of MATLAB, both the bandwidth and kernel smoother can be specified, including other options such as specifying the range of the kernel density. 
Alternatively, a free MATLAB software package which implements an automatic bandwidth selection method is available from the MATLAB Central File Exchange for 1-dimensional data 2-dimensional data n-dimensional data A free MATLAB toolbox with implementation of kernel regression, kernel density estimation, kernel estimation of hazard function and many others is available on these pages (this toolbox is a part of the book ). In Mathematica, numeric kernel density estimation is implemented by the function SmoothKernelDistribution and symbolic estimation is implemented using the function KernelMixtureDistribution both of which provide data-driven bandwidths. In Minitab, the Royal Society of Chemistry has created a macro to run kernel density estimation based on their Analytical Methods Committee Technical Brief 4. In the NAG Library, kernel density estimation is implemented via the g10ba routine (available in both the Fortran and the C versions of the Library). In Nuklei, C++ kernel density methods focus on data from the Special Euclidean group S E ( 3 ) . In Octave, kernel density estimation is implemented by the kernel_density option (econometrics package). In Origin, 2D kernel density plot can be made from its user interface, and two functions, Ksdensity for 1D and Ks2density for 2D can be used from its LabTalk, Python, or C code. In Perl, an implementation can be found in the Statistics-KernelEstimation module In PHP, an implementation can be found in the MathPHP library In Python, many implementations exist: pyqt_fit.kde Module in the PyQt-Fit package, SciPy (scipy.stats.gaussian_kde), Statsmodels (KDEUnivariate and KDEMultivariate), and scikit-learn (KernelDensity) (see comparison). KDEpy supports weighted data and its FFT implementation is orders of magnitude faster than the other implementations. The commonly used pandas library [1] offers support for kde plotting through the plot method (df.plot(kind='kde')[2]). The getdist package for weighted and correlated MCMC samples supports optimized bandwidth, boundary correction and higher-order methods for 1D and 2D distributions. One newly used package for kernel density estimation is seaborn ( import seaborn as sns , sns.kdeplot() ). A GPU implementation of KDE also exists. In R, it is implemented through density in the base distribution, and bw.nrd0 function is used in stats package, this function uses the optimized formula in Silverman's book. bkde in the KernSmooth library, ParetoDensityEstimation in the DataVisualizations library (for pareto distribution density estimation), kde in the ks library, dkden and dbckden in the evmix library (latter for boundary corrected kernel density estimation for bounded support), npudens in the np library (numeric and categorical data), sm.density in the sm library. For an implementation of the kde.R function, which does not require installing any packages or libraries, see kde.R. The btb library, dedicated to urban analysis, implements kernel density estimation through kernel_smoothing. In SAS, proc kde can be used to estimate univariate and bivariate kernel densities. In Apache Spark, the KernelDensity() class In Stata, it is implemented through kdensity; for example histogram x, kdensity. Alternatively a free Stata module KDENS is available allowing a user to estimate 1D or 2D density functions. In Swift, it is implemented through SwiftStats.KernelDensityEstimation in the open-source statistics library SwiftStats.
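A short usage sketch of one of the Python implementations listed above, scipy.stats.gaussian_kde, which selects the bandwidth automatically (Scott's rule by default; 'silverman' can be requested instead):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
samples = rng.standard_normal(500)

kde = gaussian_kde(samples, bw_method="silverman")
x_grid = np.linspace(-3.0, 3.0, 7)
print(np.round(kde(x_grid), 3))   # density estimates on the grid
print(round(kde.factor, 3))       # the bandwidth factor actually used
```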
|
Kernel density estimation : Kernel (statistics) Kernel smoothing Kernel regression Density estimation (with presentation of other examples) Mean-shift Scale space: The triplets form a scale space representation of the data. Multivariate kernel density estimation Variable kernel density estimation Head/tail breaks
|
Kernel density estimation : Härdle, Wolfgang; Müller, Marlene; Sperlich, Stefan; Werwatz, Axel (2004). Nonparametric and Semiparametric Models. Springer Series in Statistics. Berlin Heidelberg: Springer-Verlag. pp. 39–83. ISBN 978-3-540-20722-1.
|
Kernel density estimation : Introduction to kernel density estimation A short tutorial which motivates kernel density estimators as an improvement over histograms. Kernel Bandwidth Optimization A free online tool that generates an optimized kernel density estimate. Free Online Software (Calculator) computes the Kernel Density Estimation for a data series according to the following Kernels: Gaussian, Epanechnikov, Rectangular, Triangular, Biweight, Cosine, and Optcosine. Kernel Density Estimation Applet An online interactive example of kernel density estimation. Requires .NET 3.0 or later.
|
Progress in artificial intelligence : Progress in artificial intelligence (AI) refers to the advances, milestones, and breakthroughs that have been achieved in the field of artificial intelligence over time. AI is a multidisciplinary branch of computer science that aims to create machines and systems capable of performing tasks that typically require human intelligence. AI applications have been used in a wide range of fields including medical diagnosis, finance, robotics, law, video games, agriculture, and scientific discovery. However, many AI applications are not perceived as AI: "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 1990s and early 2000s, AI technology became widely used as elements of larger systems, but the field was rarely credited for these successes at the time. Kaplan and Haenlein structure artificial intelligence along three evolutionary stages: artificial narrow intelligence – AI capable only of specific tasks; artificial general intelligence – AI with ability in several areas, able to autonomously solve problems it was never even designed for; and artificial superintelligence – AI capable of general tasks, including scientific creativity, social skills, and general wisdom. To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject-matter expert Turing tests. Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results. Humans still substantially outperform both GPT-4 and models trained on the ConceptARC benchmark: the models scored 60% on most categories and 77% on one, while humans scored 91% on all categories and 97% on one.
|
Progress in artificial intelligence : There are many useful abilities that can be described as showing some form of intelligence. This gives better insight into the comparative success of artificial intelligence in different areas. AI, like electricity or the steam engine, is a general-purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at. Some versions of Moravec's paradox observe that humans are more likely to outperform machines in areas such as physical dexterity that have been the direct target of natural selection. While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets. Researcher Andrew Ng has suggested, as a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI." Games provide a high-profile benchmark for assessing rates of progress; many games have a large professional player base and a well-established competitive rating system. AlphaGo brought the era of classical board-game benchmarks to a close when artificial intelligence proved its competitive edge over humans in 2016: DeepMind's AlphaGo software program defeated the world's best professional Go player, Lee Sedol. Games of imperfect knowledge provide new challenges to AI in the area of game theory; the most prominent milestone in this area was Libratus' poker victory in 2017. E-sports continue to provide additional benchmarks; Facebook AI, DeepMind, and others have engaged with the popular StarCraft franchise of videogames. Broad classes of outcome for an AI test may be given as: optimal: it is not possible to perform better (note: some of these entries were solved by humans); super-human: performs better than all humans; high-human: performs better than most humans; par-human: performs similarly to most humans; sub-human: performs worse than most humans.
|
Progress in artificial intelligence : In his famous Turing test, Alan Turing picked language, the defining feature of human beings, as its basis. The Turing test is now considered too exploitable to be a meaningful benchmark. The Feigenbaum test, proposed by the inventor of expert systems, tests a machine's knowledge and expertise about a specific subject. A paper by Jim Gray of Microsoft in 2003 suggested extending the Turing test to speech understanding, speaking, and recognizing objects and behavior. Proposed "universal intelligence" tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are as generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; however, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.
|