Polar{$T} complex number: \",\n z.r, \" e\", z.Θ, \" i\")\n```\nA `Polar` object will then display automatically using HTML in an environment that supports HTML display, but you can call `show` manually to get HTML output if you want:\n```julia-repl\njulia> show(stdout, \"text/html\", Polar(3.0,4.0))\nPolar{Float64} complex number: 3.0 e4.0 i\n```\nAn HTML renderer would display this as: `Polar{Float64}` complex number: 3.0 *e*4.0 *i*"}
{"text": "## [Custom pretty-printing](#man-custom-pretty-printing)\nAs a rule of thumb, the single-line `show` method should print a valid Julia expression for creating the shown object. When this `show` method contains infix operators, such as the multiplication operator (`*`) in our single-line `show` method for `Polar` above, it may not parse correctly when printed as part of another object. To see this, consider the expression object (see [Program representation](../metaprogramming/#Program-representation)) which takes the square of a specific instance of our `Polar` type:\n```julia-repl\njulia> a = Polar(3, 4.0)\nPolar{Float64} complex number:\n 3.0 * exp(4.0im)\n\njulia> print(:($a^2))\n3.0 * exp(4.0im) ^ 2\n```\nBecause the operator `^` has higher precedence than `*` (see [Operator Precedence and Associativity](../mathematical-operations/#Operator-Precedence-and-Associativity)), this output does not faithfully represent the expression `a ^ 2` which should be equal to `(3.0 * exp(4.0im)) ^ 2`. To solve this issue, we must make a custom method for `Base.show_unquoted(io::IO, z::Polar, indent::Int, precedence::Int)`, which is called internally by the expression object when printing:\n```julia-repl\njulia> function Base.show_unquoted(io::IO, z::Polar, ::Int, precedence::Int)\n if Base.operator_precedence(:*) <= precedence\n print(io, \"(\")\n show(io, z)\n print(io, \")\")\n else\n show(io, z)\n end\n end\n\njulia> :($a^2)\n:((3.0 * exp(4.0im)) ^ 2)\n```"}
{"text": "## [Custom pretty-printing](#man-custom-pretty-printing)\nThe method defined above adds parentheses around the call to `show` when the precedence of the calling operator is higher than or equal to the precedence of multiplication. This check allows expressions which parse correctly without the parentheses (such as `:($a + 2)` and `:($a == 2)`) to omit them when printing:\n```julia-repl\njulia> :($a + 2)\n:(3.0 * exp(4.0im) + 2)\n\njulia> :($a == 2)\n:(3.0 * exp(4.0im) == 2)\n```\nIn some cases, it is useful to adjust the behavior of `show` methods depending on the context. This can be achieved via the [`IOContext`](../../base/io-network/#Base.IOContext) type, which allows passing contextual properties together with a wrapped IO stream. For example, we can build a shorter representation in our `show` method when the `:compact` property is set to `true`, falling back to the long representation if the property is `false` or absent:\n```julia-repl\njulia> function Base.show(io::IO, z::Polar)\n if get(io, :compact, false)::Bool\n print(io, z.r, \"ℯ\", z.Θ, \"im\")\n else\n print(io, z.r, \" * exp(\", z.Θ, \"im)\")\n end\n end\n```\nThis new compact representation will be used when the passed IO stream is an `IOContext` object with the `:compact` property set. In particular, this is the case when printing arrays with multiple columns (where horizontal space is limited):"}
{"text": "## [Custom pretty-printing](#man-custom-pretty-printing)\n```julia-repl\njulia> show(IOContext(stdout, :compact=>true), Polar(3, 4.0))\n3.0ℯ4.0im\n\njulia> [Polar(3, 4.0) Polar(4.0,5.3)]\n1×2 Matrix{Polar{Float64}}:\n 3.0ℯ4.0im 4.0ℯ5.3im\n```\nSee the [`IOContext`](../../base/io-network/#Base.IOContext) documentation for a list of common properties which can be used to adjust printing."}
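{"text": "## [Custom pretty-printing](#man-custom-pretty-printing)\nContextual properties can also be supplied when capturing output as a string. As a minimal sketch (assuming the `Polar` type and the compact `show` method defined above), the `context` keyword of `sprint` wraps the target stream in an `IOContext` for you:\n```julia-repl\njulia> sprint(show, Polar(3, 4.0); context=:compact => true)\n\"3.0ℯ4.0im\"\n\njulia> sprint(show, Polar(3, 4.0))\n\"3.0 * exp(4.0im)\"\n```\nThis is a convenient way to exercise both branches of a context-sensitive `show` method without constructing an `IOContext` by hand."}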
{"text": "## [\"Value types\"](#%22Value-types%22)\nIn Julia, you can't dispatch on a *value* such as `true` or `false`. However, you can dispatch on parametric types, and Julia allows you to include \"plain bits\" values (Types, Symbols, Integers, floating-point numbers, tuples, etc.) as type parameters. A common example is the dimensionality parameter in `Array{T,N}`, where `T` is a type (e.g., [`Float64`](../../base/numbers/#Core.Float64)) but `N` is just an `Int`.\nYou can create your own custom types that take values as parameters, and use them to control dispatch of custom types. By way of illustration of this idea, let's introduce the parametric type `Val{x}`, and its constructor `Val(x) = Val{x}()`, which serves as a customary way to exploit this technique for cases where you don't need a more elaborate hierarchy.\n[`Val`](../../base/base/#Base.Val) is defined as:\n```julia-repl\njulia> struct Val{x}\n end\n\njulia> Val(x) = Val{x}()\nVal\n```\nThere is no more to the implementation of `Val` than this. Some functions in Julia's standard library accept `Val` instances as arguments, and you can also use it to write your own functions. For example:\n```julia-repl\njulia> firstlast(::Val{true}) = \"First\"\nfirstlast (generic function with 1 method)\n\njulia> firstlast(::Val{false}) = \"Last\"\nfirstlast (generic function with 2 methods)\n\njulia> firstlast(Val(true))\n\"First\"\n\njulia> firstlast(Val(false))\n\"Last\"\n```\nFor consistency across Julia, the call site should always pass a `Val` *instance* rather than using a *type*, i.e., use `foo(Val(:bar))` rather than `foo(Val{:bar})`."}
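{"text": "## [\"Value types\"](#%22Value-types%22)\nThe parameter need not be a `Bool`: any of the \"plain bits\" values listed above can appear in a `Val`. As a hypothetical sketch (the function `greet` is not part of any library), dispatching on a `Symbol` works the same way:\n```julia-repl\njulia> greet(::Val{:formal}) = \"Good day\"\ngreet (generic function with 1 method)\n\njulia> greet(::Val{:casual}) = \"Hey\"\ngreet (generic function with 2 methods)\n\njulia> greet(Val(:formal))\n\"Good day\"\n\njulia> greet(Val(:casual))\n\"Hey\"\n```\nFollowing the convention above, the call sites pass the instances `Val(:formal)` and `Val(:casual)` rather than the types `Val{:formal}` and `Val{:casual}`."}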
{"text": "## [\"Value types\"](#%22Value-types%22)\nIt's worth noting that it's extremely easy to mis-use parametric \"value\" types, including `Val`; in unfavorable cases, you can easily end up making the performance of your code much *worse*. In particular, you would never want to write actual code as illustrated above. For more information about the proper (and improper) uses of `Val`, please read [the more extensive discussion in the performance tips](../performance-tips/#man-performance-value-type).\n- [1](#citeref-1)\"Small\" is defined by the `max_union_splitting` configuration, which currently defaults to 4.\n- [2](#citeref-2)A few popular languages have singleton types, including Haskell, Scala and Ruby.\n------------------------------------------------------------------------"}
{"text": "# Methods · The Julia Language\nSource: https://docs.julialang.org/en/v1/manual/methods/"}
{"text": "# [Methods](#Methods)\nRecall from [Functions](../functions/#man-functions) that a function is an object that maps a tuple of arguments to a return value, or throws an exception if no appropriate value can be returned. It is common for the same conceptual function or operation to be implemented quite differently for different types of arguments: adding two integers is very different from adding two floating-point numbers, both of which are distinct from adding an integer to a floating-point number. Despite their implementation differences, these operations all fall under the general concept of \"addition\". Accordingly, in Julia, these behaviors all belong to a single object: the `+` function."}
{"text": "# [Methods](#Methods)\nTo facilitate using many different implementations of the same concept smoothly, functions need not be defined all at once, but can rather be defined piecewise by providing specific behaviors for certain combinations of argument types and counts. A definition of one possible behavior for a function is called a *method*. Thus far, we have presented only examples of functions defined with a single method, applicable to all types of arguments. However, the signatures of method definitions can be annotated to indicate the types of arguments in addition to their number, and more than a single method definition may be provided. When a function is applied to a particular tuple of arguments, the most specific method applicable to those arguments is applied. Thus, the overall behavior of a function is a patchwork of the behaviors of its various method definitions. If the patchwork is well designed, even though the implementations of the methods may be quite different, the outward behavior of the function will appear seamless and consistent."}
{"text": "# [Methods](#Methods)\nThe choice of which method to execute when a function is applied is called *dispatch*. Julia allows the dispatch process to choose which of a function's methods to call based on the number of arguments given, and on the types of all of the function's arguments. This is different from traditional object-oriented languages, where dispatch occurs based only on the first argument, which often has a special argument syntax, and is sometimes implied rather than explicitly written as an argument. [[1\\]](#footnote-1) Using all of a function's arguments to choose which method should be invoked, rather than just the first, is known as [multiple dispatch](https://en.wikipedia.org/wiki/Multiple_dispatch). Multiple dispatch is particularly useful for mathematical code, where it makes little sense to artificially deem the operations to \"belong\" to one argument more than any of the others: does the addition operation in `x + y` belong to `x` any more than it does to `y`? The implementation of a mathematical operator generally depends on the types of all of its arguments. Even beyond mathematical operations, however, multiple dispatch ends up being a powerful and convenient paradigm for structuring and organizing programs.\nAll the examples in this chapter assume that you are defining methods for a function in the *same* module. If you want to add methods to a function in *another* module, you have to `import` it or use the name qualified with module names. See the section on [namespace management](../modules/#namespace-management)."}
{"text": "## [Defining Methods](#Defining-Methods)\nUntil now, we have, in our examples, defined only functions with a single method having unconstrained argument types. Such functions behave just like they would in traditional dynamically typed languages. Nevertheless, we have used multiple dispatch and methods almost continually without being aware of it: all of Julia's standard functions and operators, like the aforementioned `+` function, have many methods defining their behavior over various possible combinations of argument type and count.\nWhen defining a function, one can optionally constrain the types of parameters it is applicable to, using the `::` type-assertion operator, introduced in the section on [Composite Types](../types/#Composite-Types):\n```julia-repl\njulia> f(x::Float64, y::Float64) = 2x + y\nf (generic function with 1 method)\n```\nThis function definition applies only to calls where `x` and `y` are both values of type [`Float64`](../../base/numbers/#Core.Float64):\n```julia-repl\njulia> f(2.0, 3.0)\n7.0\n```\nApplying it to any other types of arguments will result in a [`MethodError`](../../base/base/#Core.MethodError):"}
{"text": "## [Defining Methods](#Defining-Methods)\n```julia-repl\njulia> f(2.0, 3)\nERROR: MethodError: no method matching f(::Float64, ::Int64)\nThe function `f` exists, but no method is defined for this combination of argument types.\n\nClosest candidates are:\n f(::Float64, !Matched::Float64)\n @ Main none:1\n\nStacktrace:\n[...]\n\njulia> f(Float32(2.0), 3.0)\nERROR: MethodError: no method matching f(::Float32, ::Float64)\nThe function `f` exists, but no method is defined for this combination of argument types.\n\nClosest candidates are:\n f(!Matched::Float64, ::Float64)\n @ Main none:1\n\nStacktrace:\n[...]\n\njulia> f(2.0, \"3.0\")\nERROR: MethodError: no method matching f(::Float64, ::String)\nThe function `f` exists, but no method is defined for this combination of argument types.\n\nClosest candidates are:\n f(::Float64, !Matched::Float64)\n @ Main none:1\n\nStacktrace:\n[...]\n\njulia> f(\"2.0\", \"3.0\")\nERROR: MethodError: no method matching f(::String, ::String)\nThe function `f` exists, but no method is defined for this combination of argument types.\n```\nAs you can see, the arguments must be precisely of type [`Float64`](../../base/numbers/#Core.Float64). Other numeric types, such as integers or 32-bit floating-point values, are not automatically converted to 64-bit floating-point, nor are strings parsed as numbers. Because `Float64` is a concrete type and concrete types cannot be subclassed in Julia, such a definition can only be applied to arguments that are exactly of type `Float64`. It may often be useful, however, to write more general methods where the declared parameter types are abstract:"}
{"text": "## [Defining Methods](#Defining-Methods)\n```julia-repl\njulia> f(x::Number, y::Number) = 2x - y\nf (generic function with 2 methods)\n\njulia> f(2.0, 3)\n1.0\n```\nThis method definition applies to any pair of arguments that are instances of [`Number`](../../base/numbers/#Core.Number). They need not be of the same type, so long as they are each numeric values. The problem of handling disparate numeric types is delegated to the arithmetic operations in the expression `2x - y`.\nTo define a function with multiple methods, one simply defines the function multiple times, with different numbers and types of arguments. The first method definition for a function creates the function object, and subsequent method definitions add new methods to the existing function object. The most specific method definition matching the number and types of the arguments will be executed when the function is applied. Thus, the two method definitions above, taken together, define the behavior for `f` over all pairs of instances of the abstract type `Number` – but with a different behavior specific to pairs of [`Float64`](../../base/numbers/#Core.Float64) values. If one of the arguments is a 64-bit float but the other one is not, then the `f(Float64,Float64)` method cannot be called and the more general `f(Number,Number)` method must be used:\n```julia-repl\njulia> f(2.0, 3.0)\n7.0\n\njulia> f(2, 3.0)\n1.0\n\njulia> f(2.0, 3)\n1.0\n\njulia> f(2, 3)\n1\n```"}
{"text": "## [Defining Methods](#Defining-Methods)\nThe `2x + y` definition is only used in the first case, while the `2x - y` definition is used in the others. No automatic casting or conversion of function arguments is ever performed: all conversion in Julia is non-magical and completely explicit. [Conversion and Promotion](../conversion-and-promotion/#conversion-and-promotion), however, shows how clever application of sufficiently advanced technology can be indistinguishable from magic. [[Clarke61\\]](#footnote-Clarke61)\nFor non-numeric values, and for fewer or more than two arguments, the function `f` remains undefined, and applying it will still result in a [`MethodError`](../../base/base/#Core.MethodError):\n```julia-repl\njulia> f(\"foo\", 3)\nERROR: MethodError: no method matching f(::String, ::Int64)\nThe function `f` exists, but no method is defined for this combination of argument types.\n\nClosest candidates are:\n f(!Matched::Number, ::Number)\n @ Main none:1\n f(!Matched::Float64, !Matched::Float64)\n @ Main none:1\n\nStacktrace:\n[...]\n\njulia> f()\nERROR: MethodError: no method matching f()\nThe function `f` exists, but no method is defined for this combination of argument types.\n\nClosest candidates are:\n f(!Matched::Float64, !Matched::Float64)\n @ Main none:1\n f(!Matched::Number, !Matched::Number)\n @ Main none:1\n\nStacktrace:\n[...]\n```\nYou can easily see which methods exist for a function by entering the function object itself in an interactive session:\n```julia-repl\njulia> f\nf (generic function with 2 methods)\n```"}
{"text": "## [Defining Methods](#Defining-Methods)\nThis output tells us that `f` is a function object with two methods. To find out what the signatures of those methods are, use the [`methods`](../../base/base/#Base.methods) function:\n```julia-repl\njulia> methods(f)\n# 2 methods for generic function \"f\" from Main:\n [1] f(x::Float64, y::Float64)\n @ none:1\n [2] f(x::Number, y::Number)\n @ none:1\n```\nwhich shows that `f` has two methods, one taking two `Float64` arguments and one taking arguments of type `Number`. It also indicates the file and line number where the methods were defined: because these methods were defined at the REPL, we get the apparent line number `none:1`.\nIn the absence of a type declaration with `::`, the type of a method parameter is `Any` by default, meaning that it is unconstrained since all values in Julia are instances of the abstract type `Any`. Thus, we can define a catch-all method for `f` like so:\n```julia-repl\njulia> f(x,y) = println(\"Whoa there, Nelly.\")\nf (generic function with 3 methods)\n\njulia> methods(f)\n# 3 methods for generic function \"f\" from Main:\n [1] f(x::Float64, y::Float64)\n @ none:1\n [2] f(x::Number, y::Number)\n @ none:1\n [3] f(x, y)\n @ none:1\n\njulia> f(\"foo\", 1)\nWhoa there, Nelly.\n```\nThis catch-all is less specific than any other possible method definition for a pair of parameter values, so it will only be called on pairs of arguments to which no other method definition applies."}
{"text": "## [Defining Methods](#Defining-Methods)\nNote that in the signature of the third method, there is no type specified for the arguments `x` and `y`. This is a shortened way of expressing `f(x::Any, y::Any)`.\nAlthough it seems a simple concept, multiple dispatch on the types of values is perhaps the single most powerful and central feature of the Julia language. Core operations typically have dozens of methods:\n```julia-repl\njulia> methods(+)\n# 180 methods for generic function \"+\":\n[1] +(x::Bool, z::Complex{Bool}) in Base at complex.jl:227\n[2] +(x::Bool, y::Bool) in Base at bool.jl:89\n[3] +(x::Bool) in Base at bool.jl:86\n[4] +(x::Bool, y::T) where T<:AbstractFloat in Base at bool.jl:96\n[5] +(x::Bool, z::Complex) in Base at complex.jl:234\n[6] +(a::Float16, b::Float16) in Base at float.jl:373\n[7] +(x::Float32, y::Float32) in Base at float.jl:375\n[8] +(x::Float64, y::Float64) in Base at float.jl:376\n[9] +(z::Complex{Bool}, x::Bool) in Base at complex.jl:228\n[10] +(z::Complex{Bool}, x::Real) in Base at complex.jl:242\n[11] +(x::Char, y::Integer) in Base at char.jl:40\n[12] +(c::BigInt, x::BigFloat) in Base.MPFR at mpfr.jl:307\n[13] +(a::BigInt, b::BigInt, c::BigInt, d::BigInt, e::BigInt) in Base.GMP at gmp.jl:392\n[14] +(a::BigInt, b::BigInt, c::BigInt, d::BigInt) in Base.GMP at gmp.jl:391\n[15] +(a::BigInt, b::BigInt, c::BigInt) in Base.GMP at gmp.jl:390\n[16] +(x::BigInt, y::BigInt) in Base.GMP at gmp.jl:361\n[17] +(x::BigInt, c::Union{UInt16, UInt32, UInt64, UInt8}) in Base.GMP at gmp.jl:398\n...\n[180] +(a, b, c, xs...) in Base at operators.jl:424\n```"}
{"text": "## [Defining Methods](#Defining-Methods)\nMultiple dispatch together with the flexible parametric type system give Julia its ability to abstractly express high-level algorithms decoupled from implementation details."}
{"text": "## [Method specializations](#man-method-specializations)\nWhen you create multiple methods of the same function, this is sometimes called \"specialization.\" In this case, you're specializing the *function* by adding additional methods to it: each new method is a new specialization of the function. As shown above, these specializations are returned by `methods`.\nThere's another kind of specialization that occurs without programmer intervention: Julia's compiler can automatically specialize the *method* for the specific argument types used. Such specializations are *not* listed by `methods`, as this doesn't create new `Method`s, but tools like [`@code_typed`](../../stdlib/InteractiveUtils/#InteractiveUtils.@code_typed) allow you to inspect such specializations.\nFor example, if you create a method\n```julia\nmysum(x::Real, y::Real) = x + y\n```\nyou've given the function `mysum` one new method (possibly its only method), and that method takes any pair of `Real` number inputs. But if you then execute\n```julia-repl\njulia> mysum(1, 2)\n3\n\njulia> mysum(1.0, 2.0)\n3.0\n```"}
{"text": "## [Method specializations](#man-method-specializations)\nJulia will compile `mysum` twice, once for `x::Int, y::Int` and again for `x::Float64, y::Float64`. The point of compiling twice is performance: the methods that get called for `+` (which `mysum` uses) vary depending on the specific types of `x` and `y`, and by compiling different specializations Julia can do all the method lookup ahead of time. This allows the program to run much more quickly, since it does not have to bother with method lookup while it is running. Julia's automatic specialization allows you to write generic algorithms and expect that the compiler will generate efficient, specialized code to handle each case you need.\nIn cases where the number of potential specializations might be effectively unlimited, Julia may avoid this default specialization. See [Be aware of when Julia avoids specializing](../performance-tips/#Be-aware-of-when-Julia-avoids-specializing) for more information."}
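{"text": "## [Method specializations](#man-method-specializations)\nAssuming the `mysum` method above has been defined and called, the code generated for a particular specialization can be inspected with `@code_typed` from the InteractiveUtils standard library (loaded by default in the REPL); the exact output varies between Julia versions:\n```julia-repl\njulia> using InteractiveUtils\n\njulia> @code_typed mysum(1, 2)\nCodeInfo(\n1 ─ %1 = Base.add_int(x, y)::Int64\n└──      return %1\n) => Int64\n```\nHere the generic `x + y` has been lowered to the integer-specific intrinsic `Base.add_int`, showing that this specialization was compiled for a pair of `Int` arguments."}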
{"text": "## [Method Ambiguities](#man-ambiguities)\nIt is possible to define a set of function methods such that there is no unique most specific method applicable to some combinations of arguments:\n```julia-repl\njulia> g(x::Float64, y) = 2x + y\ng (generic function with 1 method)\n\njulia> g(x, y::Float64) = x + 2y\ng (generic function with 2 methods)\n\njulia> g(2.0, 3)\n7.0\n\njulia> g(2, 3.0)\n8.0\n\njulia> g(2.0, 3.0)\nERROR: MethodError: g(::Float64, ::Float64) is ambiguous.\n\nCandidates:\n g(x, y::Float64)\n @ Main none:1\n g(x::Float64, y)\n @ Main none:1\n\nPossible fix, define\n g(::Float64, ::Float64)\n\nStacktrace:\n[...]\n```\nHere the call `g(2.0, 3.0)` could be handled by either the `g(::Float64, ::Any)` or the `g(::Any, ::Float64)` method. The order in which the methods are defined does not matter and neither is more specific than the other. In such cases, Julia raises a [`MethodError`](../../base/base/#Core.MethodError) rather than arbitrarily picking a method. You can avoid method ambiguities by specifying an appropriate method for the intersection case:\n```julia-repl\njulia> g(x::Float64, y::Float64) = 2x + 2y\ng (generic function with 3 methods)\n\njulia> g(2.0, 3)\n7.0\n\njulia> g(2, 3.0)\n8.0\n\njulia> g(2.0, 3.0)\n10.0\n```\nIt is recommended that the disambiguating method be defined first, since otherwise the ambiguity exists, if transiently, until the more specific method is defined.\nIn more complex cases, resolving method ambiguities involves a certain element of design; this topic is explored further [below](#man-method-design-ambiguities)."}
{"text": "## [Parametric Methods](#Parametric-Methods)\nMethod definitions can optionally have type parameters qualifying the signature:\n```julia-repl\njulia> same_type(x::T, y::T) where {T} = true\nsame_type (generic function with 1 method)\n\njulia> same_type(x,y) = false\nsame_type (generic function with 2 methods)\n```\nThe first method applies whenever both arguments are of the same concrete type, regardless of what type that is, while the second method acts as a catch-all, covering all other cases. Thus, overall, this defines a boolean function that checks whether its two arguments are of the same type:\n```julia-repl\njulia> same_type(1, 2)\ntrue\n\njulia> same_type(1, 2.0)\nfalse\n\njulia> same_type(1.0, 2.0)\ntrue\n\njulia> same_type(\"foo\", 2.0)\nfalse\n\njulia> same_type(\"foo\", \"bar\")\ntrue\n\njulia> same_type(Int32(1), Int64(2))\nfalse\n```\nSuch definitions correspond to methods whose type signatures are `UnionAll` types (see [UnionAll Types](../types/#UnionAll-Types)).\nThis kind of definition of function behavior by dispatch is quite common – idiomatic, even – in Julia. Method type parameters are not restricted to being used as the types of arguments: they can be used anywhere a value would be in the signature of the function or body of the function. Here's an example where the method type parameter `T` is used as the type parameter to the parametric type `Vector{T}` in the method signature:\n```julia-repl\njulia> function myappend(v::Vector{T}, x::T) where {T}\n return [v..., x]\n end\nmyappend (generic function with 1 method)\n```"}
{"text": "## [Parametric Methods](#Parametric-Methods)\nThe type parameter `T` in this example ensures that the added element `x` is an instance of the element type `T` of the vector `v`. The `where` keyword introduces a list of those constraints after the method signature definition. This works the same for one-line definitions, as seen above, and must appear *before* the [return type declaration](../functions/#man-functions-return-type), if present, as illustrated below:\n```julia-repl\njulia> (myappend(v::Vector{T}, x::T)::Vector) where {T} = [v..., x]\nmyappend (generic function with 1 method)\n\njulia> myappend([1,2,3],4)\n4-element Vector{Int64}:\n 1\n 2\n 3\n 4\n\njulia> myappend([1,2,3],2.5)\nERROR: MethodError: no method matching myappend(::Vector{Int64}, ::Float64)\nThe function `myappend` exists, but no method is defined for this combination of argument types.\n\nClosest candidates are:\n myappend(::Vector{T}, !Matched::T) where T\n @ Main none:1\n\nStacktrace:\n[...]\n\njulia> myappend([1.0,2.0,3.0],4.0)\n4-element Vector{Float64}:\n 1.0\n 2.0\n 3.0\n 4.0\n\njulia> myappend([1.0,2.0,3.0],4)\nERROR: MethodError: no method matching myappend(::Vector{Float64}, ::Int64)\nThe function `myappend` exists, but no method is defined for this combination of argument types.\n\nClosest candidates are:\n myappend(::Vector{T}, !Matched::T) where T\n @ Main none:1\n\nStacktrace:\n[...]\n```"}
{"text": "## [Parametric Methods](#Parametric-Methods)\nIf the type of the appended element does not match the element type of the vector it is appended to, a [`MethodError`](../../base/base/#Core.MethodError) is raised. In the following example, the method's type parameter `T` is used as the return value:\n```julia-repl\njulia> mytypeof(x::T) where {T} = T\nmytypeof (generic function with 1 method)\n\njulia> mytypeof(1)\nInt64\n\njulia> mytypeof(1.0)\nFloat64\n```\nJust as you can put subtype constraints on type parameters in type declarations (see [Parametric Types](../types/#Parametric-Types)), you can also constrain type parameters of methods:"}
{"text": "## [Parametric Methods](#Parametric-Methods)\n```julia-repl\njulia> same_type_numeric(x::T, y::T) where {T<:Number} = true\nsame_type_numeric (generic function with 1 method)\n\njulia> same_type_numeric(x::Number, y::Number) = false\nsame_type_numeric (generic function with 2 methods)\n\njulia> same_type_numeric(1, 2)\ntrue\n\njulia> same_type_numeric(1, 2.0)\nfalse\n\njulia> same_type_numeric(1.0, 2.0)\ntrue\n\njulia> same_type_numeric(\"foo\", 2.0)\nERROR: MethodError: no method matching same_type_numeric(::String, ::Float64)\nThe function `same_type_numeric` exists, but no method is defined for this combination of argument types.\n\nClosest candidates are:\n same_type_numeric(!Matched::T, ::T) where T<:Number\n @ Main none:1\n same_type_numeric(!Matched::Number, ::Number)\n @ Main none:1\n\nStacktrace:\n[...]\n\njulia> same_type_numeric(\"foo\", \"bar\")\nERROR: MethodError: no method matching same_type_numeric(::String, ::String)\nThe function `same_type_numeric` exists, but no method is defined for this combination of argument types.\n\njulia> same_type_numeric(Int32(1), Int64(2))\nfalse\n```\nThe `same_type_numeric` function behaves much like the `same_type` function defined above, but is only defined for pairs of numbers."}
{"text": "## [Parametric Methods](#Parametric-Methods)\nParametric methods allow the same syntax as `where` expressions used to write types (see [UnionAll Types](../types/#UnionAll-Types)). If there is only a single parameter, the enclosing curly braces (in `where {T}`) can be omitted, but are often preferred for clarity. Multiple parameters can be separated with commas, e.g. `where {T, S<:Real}`, or written using nested `where`, e.g. `where S<:Real where T`."}
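{"text": "## [Parametric Methods](#Parametric-Methods)\nFor example, the comma-separated and nested spellings below declare equivalent signatures (`swap1` and `swap2` are hypothetical names used only for illustration):\n```julia\nswap1(x::T, y::S) where {T, S<:Real} = (y, x)\nswap2(x::T, y::S) where S<:Real where T = (y, x)\n```\nIn the nested form, the innermost `where` binds most tightly, so `where S<:Real where T` corresponds to `where {T, S<:Real}`."}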
{"text": "## [Redefining Methods](#Redefining-Methods)\nWhen redefining a method or adding new methods, it is important to realize that these changes don't take effect immediately. This is key to Julia's ability to statically infer and compile code to run fast, without the usual JIT tricks and overhead. Indeed, any new method definition won't be visible to the current runtime environment, including Tasks and Threads (and any previously defined `@generated` functions). Let's start with an example to see what this means:\n```julia-repl\njulia> function tryeval()\n @eval newfun() = 1\n newfun()\n end\ntryeval (generic function with 1 method)\n\njulia> tryeval()\nERROR: MethodError: no method matching newfun()\nThe applicable method may be too new: running in world age xxxx1, while current world is xxxx2.\nClosest candidates are:\n newfun() at none:1 (method too new to be called from this world context.)\n in tryeval() at none:1\n ...\n\njulia> newfun()\n1\n```\nIn this example, observe that the new definition for `newfun` has been created, but can't be immediately called. The new global is immediately visible to the `tryeval` function, so you could write `return newfun` (without parentheses). But neither you, nor any of your callers, nor the functions they call, and so on, can call this new method definition!\nBut there's an exception: future calls to `newfun` *from the REPL* work as expected, being able to both see and call the new definition of `newfun`."}
{"text": "## [Redefining Methods](#Redefining-Methods)\nHowever, future calls to `tryeval` will continue to see the definition of `newfun` as it was *at the previous statement at the REPL*, and thus before that call to `tryeval`.\nYou may want to try this for yourself to see how it works.\nThe implementation of this behavior is a \"world age counter\". This monotonically increasing value tracks each method definition operation. This allows describing \"the set of method definitions visible to a given runtime environment\" as a single number, or \"world age\". It also allows comparing the methods available in two worlds just by comparing their ordinal value. In the example above, we see that the \"current world\" (in which the method `newfun` exists), is one greater than the task-local \"runtime world\" that was fixed when the execution of `tryeval` started.\nSometimes it is necessary to get around this (for example, if you are implementing the above REPL). Fortunately, there is an easy solution: call the function using [`Base.invokelatest`](../../base/base/#Base.invokelatest):\n```julia-repl\njulia> function tryeval2()\n @eval newfun2() = 2\n Base.invokelatest(newfun2)\n end\ntryeval2 (generic function with 1 method)\n\njulia> tryeval2()\n2\n```\nFinally, let's take a look at some more complex examples where this rule comes into play. Define a function `f(x)`, which initially has one method:\n```julia-repl\njulia> f(x) = \"original definition\"\nf (generic function with 1 method)\n```\nStart some other operations that use `f(x)`:"}
{"text": "## [Redefining Methods](#Redefining-Methods)\n```julia-repl\njulia> g(x) = f(x)\ng (generic function with 1 method)\n\njulia> t = @async f(wait()); yield();\n```\nNow we add some new methods to `f(x)`:\n```julia-repl\njulia> f(x::Int) = \"definition for Int\"\nf (generic function with 2 methods)\n\njulia> f(x::Type{Int}) = \"definition for Type{Int}\"\nf (generic function with 3 methods)\n```\nCompare how these results differ:\n```julia-repl\njulia> f(1)\n\"definition for Int\"\n\njulia> g(1)\n\"definition for Int\"\n\njulia> fetch(schedule(t, 1))\n\"original definition\"\n\njulia> t = @async f(wait()); yield();\n\njulia> fetch(schedule(t, 1))\n\"definition for Int\"\n```"}
{"text": "## [Design Patterns with Parametric Methods](#Design-Patterns-with-Parametric-Methods)\nWhile complex dispatch logic is not required for performance or usability, sometimes it can be the best way to express some algorithm. Here are a few common design patterns that come up when using dispatch in this way."}
{"text": "### [Extracting the type parameter from a super-type](#Extracting-the-type-parameter-from-a-super-type)\nHere is a correct code template for returning the element-type `T` of any arbitrary subtype of `AbstractArray` that has well-defined element type:\n```julia\nabstract type AbstractArray{T, N} end\neltype(::Type{<:AbstractArray{T}}) where {T} = T\n```\nusing so-called triangular dispatch. Note that `UnionAll` types, for example `eltype(AbstractArray{T} where T <: Integer)`, do not match the above method. The implementation of `eltype` in `Base` adds a fallback method to `Any` for such cases.\nOne common mistake is to try to get the element-type by using introspection:\n```julia\neltype_wrong(::Type{A}) where {A<:AbstractArray} = A.parameters[1]\n```\nHowever, it is not hard to construct cases where this will fail:\n```julia\nstruct BitVector <: AbstractArray{Bool, 1}; end\n```\nHere we have created a type `BitVector` which has no parameters, but where the element-type is still fully specified, with `T` equal to `Bool`!\nAnother mistake is to try to walk up the type hierarchy using `supertype`:\n```julia\neltype_wrong(::Type{AbstractArray{T}}) where {T} = T\neltype_wrong(::Type{AbstractArray{T, N}}) where {T, N} = T\neltype_wrong(::Type{A}) where {A<:AbstractArray} = eltype_wrong(supertype(A))\n```\nWhile this works for declared types, it fails for types without supertypes:"}
{"text": "### [Extracting the type parameter from a super-type](#Extracting-the-type-parameter-from-a-super-type)\n```julia-repl\njulia> eltype_wrong(Union{AbstractArray{Int}, AbstractArray{Float64}})\nERROR: MethodError: no method matching supertype(::Type{Union{AbstractArray{Float64,N} where N, AbstractArray{Int64,N} where N}})\nClosest candidates are:\n supertype(::DataType) at operators.jl:43\n supertype(::UnionAll) at operators.jl:48\n```"}
{"text": "### [Building a similar type with a different type parameter](#Building-a-similar-type-with-a-different-type-parameter)\nWhen building generic code, there is often a need for constructing a similar object with some change made to the layout of the type, also necessitating a change of the type parameters. For instance, you might have some sort of abstract array with an arbitrary element type and want to write your computation on it with a specific element type. We must implement a method for each `AbstractArray{T}` subtype that describes how to compute this type transform. There is no general transform of one subtype into another subtype with a different parameter.\nThe subtypes of `AbstractArray` typically implement two methods to achieve this: A method to convert the input array to a subtype of a specific `AbstractArray{T, N}` abstract type; and a method to make a new uninitialized array with a specific element type. Sample implementations of these can be found in Julia Base. Here is a basic example usage of them, guaranteeing that `input` and `output` are of the same type:\n```julia\ninput = convert(AbstractArray{Eltype}, input)\noutput = similar(input, Eltype)\n```"}
{"text": "### [Building a similar type with a different type parameter](#Building-a-similar-type-with-a-different-type-parameter)\nAs an extension of this, in cases where the algorithm needs a copy of the input array, [`convert`](../../base/base/#Base.convert) is insufficient as the return value may alias the original input. Combining [`similar`](../../base/arrays/#Base.similar) (to make the output array) and [`copyto!`](../../base/c/#Base.copyto!) (to fill it with the input data) is a generic way to express the requirement for a mutable copy of the input argument:\n```julia\ncopy_with_eltype(input, Eltype) = copyto!(similar(input, Eltype), input)\n```"}
{"text": "### [Iterated dispatch](#Iterated-dispatch)\nIn order to dispatch a multi-level parametric argument list, often it is best to separate each level of dispatch into distinct functions. This may sound similar in approach to single-dispatch, but as we shall see below, it is still more flexible.\nFor example, trying to dispatch on the element-type of an array will often run into ambiguous situations. Instead, commonly code will dispatch first on the container type, then recurse down to a more specific method based on eltype. In most cases, the algorithms lend themselves conveniently to this hierarchical approach, while in other cases, this rigor must be resolved manually. This dispatching branching can be observed, for example, in the logic to sum two matrices:\n```julia\n# First dispatch selects the map algorithm for element-wise summation.\n+(a::Matrix, b::Matrix) = map(+, a, b)\n# Then dispatch handles each element and selects the appropriate\n# common element type for the computation.\n+(a, b) = +(promote(a, b)...)\n# Once the elements have the same type, they can be added.\n# For example, via primitive operations exposed by the processor.\n+(a::Float64, b::Float64) = Core.add(a, b)\n```"}
{"text": "### [Trait-based dispatch](#Trait-based-dispatch)\nA natural extension to the iterated dispatch above is to add a layer to method selection that allows to dispatch on sets of types which are independent from the sets defined by the type hierarchy. We could construct such a set by writing out a `Union` of the types in question, but then this set would not be extensible as `Union`-types cannot be altered after creation. However, such an extensible set can be programmed with a design pattern often referred to as a [\"Holy-trait\"](https://github.com/JuliaLang/julia/issues/2345#issuecomment-54537633).\nThis pattern is implemented by defining a generic function which computes a different singleton value (or type) for each trait-set to which the function arguments may belong to. If this function is pure there is no impact on performance compared to normal dispatch."}
{"text": "### [Trait-based dispatch](#Trait-based-dispatch)\nThe example in the previous section glossed over the implementation details of [`map`](../../base/collections/#Base.map) and [`promote`](../../base/base/#Base.promote), which both operate in terms of these traits. When iterating over a matrix, such as in the implementation of `map`, one important question is what order to use to traverse the data. When `AbstractArray` subtypes implement the [`Base.IndexStyle`](../../base/arrays/#Base.IndexStyle) trait, other functions such as `map` can dispatch on this information to pick the best algorithm (see [Abstract Array Interface](../interfaces/#man-interface-array)). This means that each subtype does not need to implement a custom version of `map`, since the generic definitions + trait classes will enable the system to select the fastest version. Here is a toy implementation of `map` illustrating the trait-based dispatch:\n```julia\nmap(f, a::AbstractArray, b::AbstractArray) = map(Base.IndexStyle(a, b), f, a, b)\n# generic implementation:\nmap(::Base.IndexCartesian, f, a::AbstractArray, b::AbstractArray) = ...\n# linear-indexing implementation (faster)\nmap(::Base.IndexLinear, f, a::AbstractArray, b::AbstractArray) = ...\n```"}
{"text": "### [Trait-based dispatch](#Trait-based-dispatch)\nThis trait-based approach is also present in the [`promote`](../../base/base/#Base.promote) mechanism employed by the scalar `+`. It uses [`promote_type`](../../base/base/#Base.promote_type), which returns the optimal common type to compute the operation given the two types of the operands. This makes it possible to reduce the problem of implementing every function for every pair of possible type arguments, to the much smaller problem of implementing a conversion operation from each type to a common type, plus a table of preferred pair-wise promotion rules."}
{"text": "### [Output-type computation](#Output-type-computation)\nThe discussion of trait-based promotion provides a transition into our next design pattern: computing the output element type for a matrix operation.\nFor implementing primitive operations, such as addition, we use the [`promote_type`](../../base/base/#Base.promote_type) function to compute the desired output type. (As before, we saw this at work in the `promote` call in the call to `+`).\nFor more complex functions on matrices, it may be necessary to compute the expected return type for a more complex sequence of operations. This is often performed by the following steps:\n1. Write a small function `op` that expresses the set of operations performed by the kernel of the algorithm.\n2. Compute the element type `R` of the result matrix as `promote_op(op, argument_types...)`, where `argument_types` is computed from `eltype` applied to each input array.\n3. Build the output matrix as `similar(R, dims)`, where `dims` are the desired dimensions of the output array.\nFor a more specific example, a generic square-matrix multiply pseudo-code might look like:"}
{"text": "### [Output-type computation](#Output-type-computation)\n```julia\nfunction matmul(a::AbstractMatrix, b::AbstractMatrix)\n op = (ai, bi) -> ai * bi + ai * bi\n\n ## this is insufficient because it assumes `one(eltype(a))` is constructable:\n # R = typeof(op(one(eltype(a)), one(eltype(b))))\n\n ## this fails because it assumes `a[1]` exists and is representative of all elements of the array\n # R = typeof(op(a[1], b[1]))\n\n ## this is incorrect because it assumes that `+` calls `promote_type`\n ## but this is not true for some types, such as Bool:\n # R = promote_type(ai, bi)\n\n # this is wrong, since depending on the return value\n # of type-inference is very brittle (as well as not being optimizable):\n # R = Base.return_types(op, (eltype(a), eltype(b)))\n\n ## but, finally, this works:\n R = promote_op(op, eltype(a), eltype(b))\n ## although sometimes it may give a larger type than desired\n ## it will always give a correct type\n\n output = similar(b, R, (size(a, 1), size(b, 2)))\n if size(a, 2) > 0\n for j in 1:size(b, 2)\n for i in 1:size(a, 1)\n ## here we don't use `ab = zero(R)`,\n ## since `R` might be `Any` and `zero(Any)` is not defined\n ## we also must declare `ab::R` to make the type of `ab` constant in the loop,\n ## since it is possible that typeof(a * b) != typeof(a * b + a * b) == R\n ab::R = a[i, 1] * b[1, j]\n for k in 2:size(a, 2)\n ab += a[i, k] * b[k, j]\n end\n output[i, j] = ab\n end\n end\n end\n return output\nend\n```"}
{"text": "### [Separate convert and kernel logic](#Separate-convert-and-kernel-logic)\nOne way to significantly cut down on compile-times and testing complexity is to isolate the logic for converting to the desired type and the computation. This lets the compiler specialize and inline the conversion logic independent from the rest of the body of the larger kernel.\nThis is a common pattern seen when converting from a larger class of types to the one specific argument type that is actually supported by the algorithm:\n```julia\ncomplexfunction(arg::Int) = ...\ncomplexfunction(arg::Any) = complexfunction(convert(Int, arg))\n\nmatmul(a::T, b::T) = ...\nmatmul(a, b) = matmul(promote(a, b)...)\n```"}
{"text": "## [Parametrically-constrained Varargs methods](#Parametrically-constrained-Varargs-methods)\nFunction parameters can also be used to constrain the number of arguments that may be supplied to a \"varargs\" function ([Varargs Functions](../functions/#Varargs-Functions)). The notation `Vararg{T,N}` is used to indicate such a constraint. For example:\n```julia-repl\njulia> bar(a,b,x::Vararg{Any,2}) = (a,b,x)\nbar (generic function with 1 method)\n\njulia> bar(1,2,3)\nERROR: MethodError: no method matching bar(::Int64, ::Int64, ::Int64)\nThe function `bar` exists, but no method is defined for this combination of argument types.\n\nClosest candidates are:\n bar(::Any, ::Any, ::Any, !Matched::Any)\n @ Main none:1\n\nStacktrace:\n[...]\n\njulia> bar(1,2,3,4)\n(1, 2, (3, 4))\n\njulia> bar(1,2,3,4,5)\nERROR: MethodError: no method matching bar(::Int64, ::Int64, ::Int64, ::Int64, ::Int64)\nThe function `bar` exists, but no method is defined for this combination of argument types.\n\nClosest candidates are:\n bar(::Any, ::Any, ::Any, ::Any)\n @ Main none:1\n\nStacktrace:\n[...]\n```\nMore usefully, it is possible to constrain varargs methods by a parameter. For example:\n```julia\nfunction getindex(A::AbstractArray{T,N}, indices::Vararg{Number,N}) where {T,N}\n```\nwould be called only when the number of `indices` matches the dimensionality of the array.\nWhen only the type of supplied arguments needs to be constrained `Vararg{T}` can be equivalently written as `T...`. For instance `f(x::Int...) = x` is a shorthand for `f(x::Vararg{Int}) = x`."}
{"text": "## [Note on Optional and keyword Arguments](#Note-on-Optional-and-keyword-Arguments)\nAs mentioned briefly in [Functions](../functions/#man-functions), optional arguments are implemented as syntax for multiple method definitions. For example, this definition:\n```julia\nf(a=1,b=2) = a+2b\n```\ntranslates to the following three methods:\n```julia\nf(a,b) = a+2b\nf(a) = f(a,2)\nf() = f(1,2)\n```\nThis means that calling `f()` is equivalent to calling `f(1,2)`. In this case the result is `5`, because `f(1,2)` invokes the first method of `f` above. However, this need not always be the case. If you define a fourth method that is more specialized for integers:\n```julia\nf(a::Int,b::Int) = a-2b\n```\nthen the result of both `f()` and `f(1,2)` is `-3`. In other words, optional arguments are tied to a function, not to any specific method of that function. It depends on the types of the optional arguments which method is invoked. When optional arguments are defined in terms of a global variable, the type of the optional argument may even change at run-time.\nKeyword arguments behave quite differently from ordinary positional arguments. In particular, they do not participate in method dispatch. Methods are dispatched based only on positional arguments, with keyword arguments processed after the matching method is identified."}
{"text": "## [Function-like objects](#Function-like-objects)\nMethods are associated with types, so it is possible to make any arbitrary Julia object \"callable\" by adding methods to its type. (Such \"callable\" objects are sometimes called \"functors.\")\nFor example, you can define a type that stores the coefficients of a polynomial, but behaves like a function evaluating the polynomial:\n```julia-repl\njulia> struct Polynomial{R}\n coeffs::Vector{R}\n end\n\njulia> function (p::Polynomial)(x)\n v = p.coeffs[end]\n for i = (length(p.coeffs)-1):-1:1\n v = v*x + p.coeffs[i]\n end\n return v\n end\n\njulia> (p::Polynomial)() = p(5)\n```\nNotice that the function is specified by type instead of by name. As with normal functions there is a terse syntax form. In the function body, `p` will refer to the object that was called. A `Polynomial` can be used as follows:\n```julia-repl\njulia> p = Polynomial([1,10,100])\nPolynomial{Int64}([1, 10, 100])\n\njulia> p(3)\n931\n\njulia> p()\n2551\n```\nThis mechanism is also the key to how type constructors and closures (inner functions that refer to their surrounding environment) work in Julia."}
{"text": "## [Empty generic functions](#Empty-generic-functions)\nOccasionally it is useful to introduce a generic function without yet adding methods. This can be used to separate interface definitions from implementations. It might also be done for the purpose of documentation or code readability. The syntax for this is an empty `function` block without a tuple of arguments:\n```julia\nfunction emptyfunc end\n```"}
{"text": "## [Method design and the avoidance of ambiguities](#man-method-design-ambiguities)\nJulia's method polymorphism is one of its most powerful features, yet exploiting this power can pose design challenges. In particular, in more complex method hierarchies it is not uncommon for [ambiguities](#man-ambiguities) to arise.\nAbove, it was pointed out that one can resolve ambiguities like\n```julia\nf(x, y::Int) = 1\nf(x::Int, y) = 2\n```\nby defining a method\n```julia\nf(x::Int, y::Int) = 3\n```\nThis is often the right strategy; however, there are circumstances where following this advice mindlessly can be counterproductive. In particular, the more methods a generic function has, the more possibilities there are for ambiguities. When your method hierarchies get more complicated than this simple example, it can be worth your while to think carefully about alternative strategies.\nBelow we discuss particular challenges and some alternative ways to resolve such issues."}
{"text": "### [Tuple and NTuple arguments](#Tuple-and-NTuple-arguments)\n`Tuple` (and `NTuple`) arguments present special challenges. For example,\n```julia\nf(x::NTuple{N,Int}) where {N} = 1\nf(x::NTuple{N,Float64}) where {N} = 2\n```\nare ambiguous because of the possibility that `N == 0`: there are no elements to determine whether the `Int` or `Float64` variant should be called. To resolve the ambiguity, one approach is define a method for the empty tuple:\n```julia\nf(x::Tuple{}) = 3\n```\nAlternatively, for all methods but one you can insist that there is at least one element in the tuple:\n```julia\nf(x::NTuple{N,Int}) where {N} = 1 # this is the fallback\nf(x::Tuple{Float64, Vararg{Float64}}) = 2 # this requires at least one Float64\n```"}
{"text": "### [Orthogonalize your design](#man-methods-orthogonalize)\nWhen you might be tempted to dispatch on two or more arguments, consider whether a \"wrapper\" function might make for a simpler design. For example, instead of writing multiple variants:\n```julia\nf(x::A, y::A) = ...\nf(x::A, y::B) = ...\nf(x::B, y::A) = ...\nf(x::B, y::B) = ...\n```\nyou might consider defining\n```julia\nf(x::A, y::A) = ...\nf(x, y) = f(g(x), g(y))\n```\nwhere `g` converts the argument to type `A`. This is a very specific example of the more general principle of [orthogonal design](https://en.wikipedia.org/wiki/Orthogonality_(programming)), in which separate concepts are assigned to separate methods. Here, `g` will most likely need a fallback definition\n```julia\ng(x::A) = x\n```\nA related strategy exploits `promote` to bring `x` and `y` to a common type:\n```julia\nf(x::T, y::T) where {T} = ...\nf(x, y) = f(promote(x, y)...)\n```\nOne risk with this design is the possibility that if there is no suitable promotion method converting `x` and `y` to the same type, the second method will recurse on itself infinitely and trigger a stack overflow."}
{"text": "### [Dispatch on one argument at a time](#Dispatch-on-one-argument-at-a-time)\nIf you need to dispatch on multiple arguments, and there are many fallbacks with too many combinations to make it practical to define all possible variants, then consider introducing a \"name cascade\" where (for example) you dispatch on the first argument and then call an internal method:\n```julia\nf(x::A, y) = _fA(x, y)\nf(x::B, y) = _fB(x, y)\n```\nThen the internal methods `_fA` and `_fB` can dispatch on `y` without concern about ambiguities with each other with respect to `x`.\nBe aware that this strategy has at least one major disadvantage: in many cases, it is not possible for users to further customize the behavior of `f` by defining further specializations of your exported function `f`. Instead, they have to define specializations for your internal methods `_fA` and `_fB`, and this blurs the lines between exported and internal methods."}
{"text": "### [Abstract containers and element types](#Abstract-containers-and-element-types)\nWhere possible, try to avoid defining methods that dispatch on specific element types of abstract containers. For example,\n```julia\n-(A::AbstractArray{T}, b::Date) where {T<:Date}\n```\ngenerates ambiguities for anyone who defines a method\n```julia\n-(A::MyArrayType{T}, b::T) where {T}\n```\nThe best approach is to avoid defining *either* of these methods: instead, rely on a generic method `-(A::AbstractArray, b)` and make sure this method is implemented with generic calls (like `similar` and `-`) that do the right thing for each container type and element type *separately*. This is just a more complex variant of the advice to [orthogonalize](#man-methods-orthogonalize) your methods.\nWhen this approach is not possible, it may be worth starting a discussion with other developers about resolving the ambiguity; just because one method was defined first does not necessarily mean that it can't be modified or eliminated. As a last resort, one developer can define the \"band-aid\" method\n```julia\n-(A::MyArrayType{T}, b::Date) where {T<:Date} = ...\n```\nthat resolves the ambiguity by brute force."}
{"text": "### [Complex method \"cascades\" with default arguments](#Complex-method-%22cascades%22-with-default-arguments)\nIf you are defining a method \"cascade\" that supplies defaults, be careful about dropping any arguments that correspond to potential defaults. For example, suppose you're writing a digital filtering algorithm and you have a method that handles the edges of the signal by applying padding:\n```julia\nfunction myfilter(A, kernel, ::Replicate)\n Apadded = replicate_edges(A, size(kernel))\n myfilter(Apadded, kernel) # now perform the \"real\" computation\nend\n```\nThis will run afoul of a method that supplies default padding:\n```julia\nmyfilter(A, kernel) = myfilter(A, kernel, Replicate()) # replicate the edge by default\n```\nTogether, these two methods generate an infinite recursion with `A` constantly growing bigger.\nThe better design would be to define your call hierarchy like this:\n```julia\nstruct NoPad end # indicate that no padding is desired, or that it's already applied\n\nmyfilter(A, kernel) = myfilter(A, kernel, Replicate()) # default boundary conditions\n\nfunction myfilter(A, kernel, ::Replicate)\n Apadded = replicate_edges(A, size(kernel))\n myfilter(Apadded, kernel, NoPad()) # indicate the new boundary conditions\nend\n\n# other padding methods go here\n\nfunction myfilter(A, kernel, ::NoPad)\n # Here's the \"real\" implementation of the core computation\nend\n```"}
{"text": "### [Complex method \"cascades\" with default arguments](#Complex-method-%22cascades%22-with-default-arguments)\n`NoPad` is supplied in the same argument position as any other kind of padding, so it keeps the dispatch hierarchy well organized and with reduced likelihood of ambiguities. Moreover, it extends the \"public\" `myfilter` interface: a user who wants to control the padding explicitly can call the `NoPad` variant directly."}
{"text": "## [Defining methods in local scope](#Defining-methods-in-local-scope)\nYou can define methods within a [local scope](../variables-and-scoping/#scope-of-variables), for example\n```julia-repl\njulia> function f(x)\n g(y::Int) = y + x\n g(y) = y - x\n g\n end\nf (generic function with 1 method)\n\njulia> h = f(3);\n\njulia> h(4)\n7\n\njulia> h(4.0)\n1.0\n```\nHowever, you should *not* define local methods conditionally or subject to control flow, as in\n```julia\nfunction f2(inc)\n if inc\n g(x) = x + 1\n else\n g(x) = x - 1\n end\nend\n\nfunction f3()\n function g end\n return g\n g() = 0\nend\n```\nas it is not clear what function will end up getting defined. In the future, it might be an error to define local methods in this manner.\nFor cases like this use anonymous functions instead:\n```julia\nfunction f2(inc)\n g = if inc\n x -> x + 1\n else\n x -> x - 1\n end\nend\n```\n- [1](#citeref-1)In C++ or Java, for example, in a method call like `obj.meth(arg1,arg2)`, the object obj \"receives\" the method call and is implicitly passed to the method via the `this` keyword, rather than as an explicit method argument. When the current `this` object is the receiver of a method call, it can be omitted altogether, writing just `meth(arg1,arg2)`, with `this` implied as the receiving object.\n- [Clarke61](#citeref-Clarke61)Arthur C. Clarke, *Profiles of the Future* (1961): Clarke's Third Law.\n------------------------------------------------------------------------"}
{"text": "# Constructors · The Julia Language\nSource: https://docs.julialang.org/en/v1/manual/constructors/"}
{"text": "# [Constructors](#man-constructors)\nConstructors [[1\\]](#footnote-1) are functions that create new objects – specifically, instances of [Composite Types](../types/#Composite-Types). In Julia, type objects also serve as constructor functions: they create new instances of themselves when applied to an argument tuple as a function. This much was already mentioned briefly when composite types were introduced. For example:\n```julia-repl\njulia> struct Foo\n bar\n baz\n end\n\njulia> foo = Foo(1, 2)\nFoo(1, 2)\n\njulia> foo.bar\n1\n\njulia> foo.baz\n2\n```\nFor many types, forming new objects by binding their field values together is all that is ever needed to create instances. However, in some cases more functionality is required when creating composite objects. Sometimes invariants must be enforced, either by checking arguments or by transforming them. [Recursive data structures](https://en.wikipedia.org/wiki/Recursion_%28computer_science%29#Recursive_data_structures_.28structural_recursion.29), especially those that may be self-referential, often cannot be constructed cleanly without first being created in an incomplete state and then altered programmatically to be made whole, as a separate step from object creation. Sometimes, it's just convenient to be able to construct objects with fewer or different types of parameters than they have fields. Julia's system for object construction addresses all of these cases and more."}
{"text": "## [Outer Constructor Methods](#man-outer-constructor-methods)\nA constructor is just like any other function in Julia in that its overall behavior is defined by the combined behavior of its methods. Accordingly, you can add functionality to a constructor by simply defining new methods. For example, let's say you want to add a constructor method for `Foo` objects that takes only one argument and uses the given value for both the `bar` and `baz` fields. This is simple:\n```julia-repl\njulia> Foo(x) = Foo(x,x)\nFoo\n\njulia> Foo(1)\nFoo(1, 1)\n```\nYou could also add a zero-argument `Foo` constructor method that supplies default values for both of the `bar` and `baz` fields:\n```julia-repl\njulia> Foo() = Foo(0)\nFoo\n\njulia> Foo()\nFoo(0, 0)\n```\nHere the zero-argument constructor method calls the single-argument constructor method, which in turn calls the automatically provided two-argument constructor method. For reasons that will become clear very shortly, additional constructor methods declared as normal methods like this are called *outer* constructor methods. Outer constructor methods can only ever create a new instance by calling another constructor method, such as the automatically provided default ones."}
{"text": "## [Inner Constructor Methods](#man-inner-constructor-methods)\nWhile outer constructor methods succeed in addressing the problem of providing additional convenience methods for constructing objects, they fail to address the other two use cases mentioned in the introduction of this chapter: enforcing invariants, and allowing construction of self-referential objects. For these problems, one needs *inner* constructor methods. An inner constructor method is like an outer constructor method, except for two differences:\n1. It is declared inside the block of a type declaration, rather than outside of it like normal methods.\n2. It has access to a special locally existent function called [`new`](../../base/base/#new) that creates objects of the block's type.\nFor example, suppose one wants to declare a type that holds a pair of real numbers, subject to the constraint that the first number is not greater than the second one. One could declare it like this:\n```julia-repl\njulia> struct OrderedPair\n x::Real\n y::Real\n OrderedPair(x,y) = x > y ? error(\"out of order\") : new(x,y)\n end\n```\nNow `OrderedPair` objects can only be constructed such that `x <= y`:\n```julia-repl\njulia> OrderedPair(1, 2)\nOrderedPair(1, 2)\n\njulia> OrderedPair(2,1)\nERROR: out of order\nStacktrace:\n [1] error at ./error.jl:33 [inlined]\n [2] OrderedPair(::Int64, ::Int64) at ./none:4\n [3] top-level scope\n```"}
{"text": "## [Inner Constructor Methods](#man-inner-constructor-methods)\nIf the type were declared `mutable`, you could reach in and directly change the field values to violate this invariant. Of course, messing around with an object's internals uninvited is bad practice. You (or someone else) can also provide additional outer constructor methods at any later point, but once a type is declared, there is no way to add more inner constructor methods. Since outer constructor methods can only create objects by calling other constructor methods, ultimately, some inner constructor must be called to create an object. This guarantees that all objects of the declared type must come into existence by a call to one of the inner constructor methods provided with the type, thereby giving some degree of enforcement of a type's invariants.\nIf any inner constructor method is defined, no default constructor method is provided: it is presumed that you have supplied yourself with all the inner constructors you need. The default constructor is equivalent to writing your own inner constructor method that takes all of the object's fields as parameters (constrained to be of the correct type, if the corresponding field has a type), and passes them to `new`, returning the resulting object:\n```julia-repl\njulia> struct Foo\n bar\n baz\n Foo(bar,baz) = new(bar,baz)\n end\n```"}
{"text": "## [Inner Constructor Methods](#man-inner-constructor-methods)\nThis declaration has the same effect as the earlier definition of the `Foo` type without an explicit inner constructor method. The following two types are equivalent – one with a default constructor, the other with an explicit constructor:\n```julia-repl\njulia> struct T1\n x::Int64\n end\n\njulia> struct T2\n x::Int64\n T2(x) = new(x)\n end\n\njulia> T1(1)\nT1(1)\n\njulia> T2(1)\nT2(1)\n\njulia> T1(1.0)\nT1(1)\n\njulia> T2(1.0)\nT2(1)\n```\nIt is good practice to provide as few inner constructor methods as possible: only those taking all arguments explicitly and enforcing essential error checking and transformation. Additional convenience constructor methods, supplying default values or auxiliary transformations, should be provided as outer constructors that call the inner constructors to do the heavy lifting. This separation is typically quite natural."}
{"text": "## [Incomplete Initialization](#Incomplete-Initialization)\nThe final problem which has still not been addressed is construction of self-referential objects, or more generally, recursive data structures. Since the fundamental difficulty may not be immediately obvious, let us briefly explain it. Consider the following recursive type declaration:\n```julia-repl\njulia> mutable struct SelfReferential\n obj::SelfReferential\n end\n```\nThis type may appear innocuous enough, until one considers how to construct an instance of it. If `a` is an instance of `SelfReferential`, then a second instance can be created by the call:\n```julia-repl\njulia> b = SelfReferential(a)\n```\nBut how does one construct the first instance when no instance exists to provide as a valid value for its `obj` field? The only solution is to allow creating an incompletely initialized instance of `SelfReferential` with an unassigned `obj` field, and using that incomplete instance as a valid value for the `obj` field of another instance, such as, for example, itself."}
{"text": "## [Incomplete Initialization](#Incomplete-Initialization)\nTo allow for the creation of incompletely initialized objects, Julia allows the [`new`](../../base/base/#new) function to be called with fewer than the number of fields that the type has, returning an object with the unspecified fields uninitialized. The inner constructor method can then use the incomplete object, finishing its initialization before returning it. Here, for example, is another attempt at defining the `SelfReferential` type, this time using a zero-argument inner constructor returning instances having `obj` fields pointing to themselves:\n```julia-repl\njulia> mutable struct SelfReferential\n obj::SelfReferential\n SelfReferential() = (x = new(); x.obj = x)\n end\n```\nWe can verify that this constructor works and constructs objects that are, in fact, self-referential:\n```julia-repl\njulia> x = SelfReferential();\n\njulia> x === x\ntrue\n\njulia> x === x.obj\ntrue\n\njulia> x === x.obj.obj\ntrue\n```\nAlthough it is generally a good idea to return a fully initialized object from an inner constructor, it is possible to return incompletely initialized objects:\n```julia-repl\njulia> mutable struct Incomplete\n data\n Incomplete() = new()\n end\n\njulia> z = Incomplete();\n```\nWhile you are allowed to create objects with uninitialized fields, any access to an uninitialized reference is an immediate error:\n```julia-repl\njulia> z.data\nERROR: UndefRefError: access to undefined reference\n```"}
{"text": "## [Incomplete Initialization](#Incomplete-Initialization)\nThis avoids the need to continually check for `null` values. However, not all object fields are references. Julia considers some types to be \"plain data\", meaning all of their data is self-contained and does not reference other objects. The plain data types consist of primitive types (e.g. `Int`) and immutable structs of other plain data types (see also: [`isbits`](../../base/base/#Base.isbits), [`isbitstype`](../../base/base/#Base.isbitstype)). The initial contents of a plain data type is undefined:\n```julia-repl\njulia> struct HasPlain\n n::Int\n HasPlain() = new()\n end\n\njulia> HasPlain()\nHasPlain(438103441441)\n```\nArrays of plain data types exhibit the same behavior.\nYou can pass incomplete objects to other functions from inner constructors to delegate their completion:\n```julia-repl\njulia> mutable struct Lazy\n data\n Lazy(v) = complete_me(new(), v)\n end\n```\nAs with incomplete objects returned from constructors, if `complete_me` or any of its callees try to access the `data` field of the `Lazy` object before it has been initialized, an error will be thrown immediately."}
{"text": "## [Parametric Constructors](#Parametric-Constructors)\nParametric types add a few wrinkles to the constructor story. Recall from [Parametric Types](../types/#Parametric-Types) that, by default, instances of parametric composite types can be constructed either with explicitly given type parameters or with type parameters implied by the types of the arguments given to the constructor. Here are some examples:\n```julia-repl\njulia> struct Point{T<:Real}\n x::T\n y::T\n end\n\njulia> Point(1,2) ## implicit T ##\nPoint{Int64}(1, 2)\n\njulia> Point(1.0,2.5) ## implicit T ##\nPoint{Float64}(1.0, 2.5)\n\njulia> Point(1,2.5) ## implicit T ##\nERROR: MethodError: no method matching Point(::Int64, ::Float64)\nThe type `Point` exists, but no method is defined for this combination of argument types when trying to construct it.\n\nClosest candidates are:\n Point(::T, ::T) where T<:Real at none:2\n\njulia> Point{Int64}(1, 2) ## explicit T ##\nPoint{Int64}(1, 2)\n\njulia> Point{Int64}(1.0,2.5) ## explicit T ##\nERROR: InexactError: Int64(2.5)\nStacktrace:\n[...]\n\njulia> Point{Float64}(1.0, 2.5) ## explicit T ##\nPoint{Float64}(1.0, 2.5)\n\njulia> Point{Float64}(1,2) ## explicit T ##\nPoint{Float64}(1.0, 2.0)\n```"}
{"text": "## [Parametric Constructors](#Parametric-Constructors)\nAs you can see, for constructor calls with explicit type parameters, the arguments are converted to the implied field types: `Point{Int64}(1,2)` works, but `Point{Int64}(1.0,2.5)` raises an [`InexactError`](../../base/base/#Core.InexactError) when converting `2.5` to [`Int64`](../../base/numbers/#Core.Int64). When the type is implied by the arguments to the constructor call, as in `Point(1,2)`, then the types of the arguments must agree – otherwise the `T` cannot be determined – but any pair of real arguments with matching type may be given to the generic `Point` constructor.\nWhat's really going on here is that `Point`, `Point{Float64}` and `Point{Int64}` are all different constructor functions. In fact, `Point{T}` is a distinct constructor function for each type `T`. Without any explicitly provided inner constructors, the declaration of the composite type `Point{T<:Real}` automatically provides an inner constructor, `Point{T}`, for each possible type `T<:Real`, that behaves just like non-parametric default inner constructors do. It also provides a single general outer `Point` constructor that takes pairs of real arguments, which must be of the same type. This automatic provision of constructors is equivalent to the following explicit declaration:\n```julia-repl\njulia> struct Point{T<:Real}\n x::T\n y::T\n Point{T}(x,y) where {T<:Real} = new(x,y)\n end\n\njulia> Point(x::T, y::T) where {T<:Real} = Point{T}(x,y);\n```"}
{"text": "## [Parametric Constructors](#Parametric-Constructors)\nNotice that each definition looks like the form of constructor call that it handles. The call `Point{Int64}(1,2)` will invoke the definition `Point{T}(x,y)` inside the `struct` block. The outer constructor declaration, on the other hand, defines a method for the general `Point` constructor which only applies to pairs of values of the same real type. This declaration makes constructor calls without explicit type parameters, like `Point(1,2)` and `Point(1.0,2.5)`, work. Since the method declaration restricts the arguments to being of the same type, calls like `Point(1,2.5)`, with arguments of different types, result in \"no method\" errors.\nSuppose we wanted to make the constructor call `Point(1,2.5)` work by \"promoting\" the integer value `1` to the floating-point value `1.0`. The simplest way to achieve this is to define the following additional outer constructor method:\n```julia-repl\njulia> Point(x::Int64, y::Float64) = Point(convert(Float64,x),y);\n```\nThis method uses the [`convert`](../../base/base/#Base.convert) function to explicitly convert `x` to [`Float64`](../../base/numbers/#Core.Float64) and then delegates construction to the general constructor for the case where both arguments are [`Float64`](../../base/numbers/#Core.Float64). With this method definition what was previously a [`MethodError`](../../base/base/#Core.MethodError) now successfully creates a point of type `Point{Float64}`:\n```julia-repl\njulia> p = Point(1,2.5)\nPoint{Float64}(1.0, 2.5)\n\njulia> typeof(p)\nPoint{Float64}\n```"}
{"text": "## [Parametric Constructors](#Parametric-Constructors)\nHowever, other similar calls still don't work:\n```julia-repl\njulia> Point(1.5,2)\nERROR: MethodError: no method matching Point(::Float64, ::Int64)\nThe type `Point` exists, but no method is defined for this combination of argument types when trying to construct it.\n\nClosest candidates are:\n Point(::T, !Matched::T) where T<:Real\n @ Main none:1\n Point(!Matched::Int64, !Matched::Float64)\n @ Main none:1\n\nStacktrace:\n[...]\n```\nFor a more general way to make all such calls work sensibly, see [Conversion and Promotion](../conversion-and-promotion/#conversion-and-promotion). At the risk of spoiling the suspense, we can reveal here that all it takes is the following outer method definition to make all calls to the general `Point` constructor work as one would expect:\n```julia-repl\njulia> Point(x::Real, y::Real) = Point(promote(x,y)...);\n```\nThe `promote` function converts all its arguments to a common type – in this case [`Float64`](../../base/numbers/#Core.Float64). With this method definition, the `Point` constructor promotes its arguments the same way that numeric operators like [`+`](../../base/math/#Base.:+) do, and works for all kinds of real numbers:\n```julia-repl\njulia> Point(1.5,2)\nPoint{Float64}(1.5, 2.0)\n\njulia> Point(1,1//2)\nPoint{Rational{Int64}}(1//1, 1//2)\n\njulia> Point(1.0,1//2)\nPoint{Float64}(1.0, 0.5)\n```"}
{"text": "## [Parametric Constructors](#Parametric-Constructors)\nThus, while the implicit type parameter constructors provided by default in Julia are fairly strict, it is possible to make them behave in a more relaxed but sensible manner quite easily. Moreover, since constructors can leverage all of the power of the type system, methods, and multiple dispatch, defining sophisticated behavior is typically quite simple."}
{"text": "## [Case Study: Rational](#Case-Study:-Rational)\nPerhaps the best way to tie all these pieces together is to present a real world example of a parametric composite type and its constructor methods. To that end, we implement our own rational number type `OurRational`, similar to Julia's built-in [`Rational`](../../base/numbers/#Base.Rational) type, defined in [`rational.jl`](https://github.com/JuliaLang/julia/blob/master/base/rational.jl):\n```julia-repl\njulia> struct OurRational{T<:Integer} <: Real\n num::T\n den::T\n function OurRational{T}(num::T, den::T) where T<:Integer\n if num == 0 && den == 0\n error(\"invalid rational: 0//0\")\n end\n num = flipsign(num, den)\n den = flipsign(den, den)\n g = gcd(num, den)\n num = div(num, g)\n den = div(den, g)\n new(num, den)\n end\n end\n\njulia> OurRational(n::T, d::T) where {T<:Integer} = OurRational{T}(n,d)\nOurRational\n\njulia> OurRational(n::Integer, d::Integer) = OurRational(promote(n,d)...)\nOurRational\n\njulia> OurRational(n::Integer) = OurRational(n,one(n))\nOurRational\n\njulia> ⊘(n::Integer, d::Integer) = OurRational(n,d)\n⊘ (generic function with 1 method)\n\njulia> ⊘(x::OurRational, y::Integer) = x.num ⊘ (x.den*y)\n⊘ (generic function with 2 methods)\n\njulia> ⊘(x::Integer, y::OurRational) = (x*y.den) ⊘ y.num\n⊘ (generic function with 3 methods)\n\njulia> ⊘(x::Complex, y::Real) = complex(real(x) ⊘ y, imag(x) ⊘ y)\n⊘ (generic function with 4 methods)\n\njulia> ⊘(x::Real, y::Complex) = (x*y') ⊘ real(y*y')\n⊘ (generic function with 5 methods)\n\njulia> function ⊘(x::Complex, y::Complex)\n xy = x*y'\n yy = real(y*y')\n complex(real(xy) ⊘ yy, imag(xy) ⊘ yy)\n end\n⊘ (generic function with 6 methods)\n```"}
{"text": "## [Case Study: Rational](#Case-Study:-Rational)\nThe first line – `struct OurRational{T<:Integer} <: Real` – declares that `OurRational` takes one type parameter of an integer type, and is itself a real type. The field declarations `num::T` and `den::T` indicate that the data held in a `OurRational{T}` object are a pair of integers of type `T`, one representing the rational value's numerator and the other representing its denominator.\nNow things get interesting. `OurRational` has a single inner constructor method which checks that `num` and `den` aren't both zero and ensures that every rational is constructed in \"lowest terms\" with a non-negative denominator. This is accomplished by first flipping the signs of numerator and denominator if the denominator is negative. Then, both are divided by their greatest common divisor (`gcd` always returns a non-negative number, regardless of the sign of its arguments). Because this is the only inner constructor for `OurRational`, we can be certain that `OurRational` objects are always constructed in this normalized form."}
{"text": "## [Case Study: Rational](#Case-Study:-Rational)\n`OurRational` also provides several outer constructor methods for convenience. The first is the \"standard\" general constructor that infers the type parameter `T` from the type of the numerator and denominator when they have the same type. The second applies when the given numerator and denominator values have different types: it promotes them to a common type and then delegates construction to the outer constructor for arguments of matching type. The third outer constructor turns integer values into rationals by supplying a value of `1` as the denominator."}
{"text": "## [Case Study: Rational](#Case-Study:-Rational)\nFollowing the outer constructor definitions, we defined a number of methods for the `⊘` operator, which provides a syntax for writing rationals (e.g. `1 ⊘ 2`). Julia's `Rational` type uses the [`//`](../../base/math/#Base.://) operator for this purpose. Before these definitions, `⊘` is a completely undefined operator with only syntax and no meaning. Afterwards, it behaves just as described in [Rational Numbers](../complex-and-rational-numbers/#Rational-Numbers) – its entire behavior is defined in these few lines. Note that the infix use of `⊘` works because Julia has a set of symbols that are recognized to be infix operators. The first and most basic definition just makes `a ⊘ b` construct a `OurRational` by applying the `OurRational` constructor to `a` and `b` when they are integers. When one of the operands of `⊘` is already a rational number, we construct a new rational for the resulting ratio slightly differently; this behavior is actually identical to division of a rational with an integer. Finally, applying `⊘` to complex integral values creates an instance of `Complex{<:OurRational}` – a complex number whose real and imaginary parts are rationals:\n```julia-repl\njulia> z = (1 + 2im) ⊘ (1 - 2im);\n\njulia> typeof(z)\nComplex{OurRational{Int64}}\n\njulia> typeof(z) <: Complex{<:OurRational}\ntrue\n```"}
{"text": "## [Case Study: Rational](#Case-Study:-Rational)\nThus, although the `⊘` operator usually returns an instance of `OurRational`, if either of its arguments are complex integers, it will return an instance of `Complex{<:OurRational}` instead. The interested reader should consider perusing the rest of [`rational.jl`](https://github.com/JuliaLang/julia/blob/master/base/rational.jl): it is short, self-contained, and implements an entire basic Julia type."}
{"text": "## [Outer-only constructors](#Outer-only-constructors)\nAs we have seen, a typical parametric type has inner constructors that are called when type parameters are known; e.g. they apply to `Point{Int}` but not to `Point`. Optionally, outer constructors that determine type parameters automatically can be added, for example constructing a `Point{Int}` from the call `Point(1,2)`. Outer constructors call inner constructors to actually make instances. However, in some cases one would rather not provide inner constructors, so that specific type parameters cannot be requested manually.\nFor example, say we define a type that stores a vector along with an accurate representation of its sum:\n```julia-repl\njulia> struct SummedArray{T<:Number,S<:Number}\n data::Vector{T}\n sum::S\n end\n\njulia> SummedArray(Int32[1; 2; 3], Int32(6))\nSummedArray{Int32, Int32}(Int32[1, 2, 3], 6)\n```\nThe problem is that we want `S` to be a larger type than `T`, so that we can sum many elements with less information loss. For example, when `T` is [`Int32`](../../base/numbers/#Core.Int32), we would like `S` to be [`Int64`](../../base/numbers/#Core.Int64). Therefore we want to avoid an interface that allows the user to construct instances of the type `SummedArray{Int32,Int32}`. One way to do this is to provide a constructor only for `SummedArray`, but inside the `struct` definition block to suppress generation of default constructors:"}
{"text": "## [Outer-only constructors](#Outer-only-constructors)\n```julia-repl\njulia> struct SummedArray{T<:Number,S<:Number}\n data::Vector{T}\n sum::S\n function SummedArray(a::Vector{T}) where T\n S = widen(T)\n new{T,S}(a, sum(S, a))\n end\n end\n\njulia> SummedArray(Int32[1; 2; 3], Int32(6))\nERROR: MethodError: no method matching SummedArray(::Vector{Int32}, ::Int32)\nThe type `SummedArray` exists, but no method is defined for this combination of argument types when trying to construct it.\n\nClosest candidates are:\n SummedArray(::Vector{T}) where T\n @ Main none:4\n\nStacktrace:\n[...]\n```\nThis constructor will be invoked by the syntax `SummedArray(a)`. The syntax `new{T,S}` allows specifying parameters for the type to be constructed, i.e. this call will return a `SummedArray{T,S}`. `new{T,S}` can be used in any constructor definition, but for convenience the parameters to `new{}` are automatically derived from the type being constructed when possible."}
{"text": "## [Constructors are just callable objects](#Constructors-are-just-callable-objects)\nAn object of any type may be [made callable](../methods/#Function-like-objects) by defining a method. This includes types, i.e., objects of type [`Type`](../../base/base/#Core.Type); and constructors may, in fact, be viewed as just callable type objects. For example, there are many methods defined on `Bool` and various supertypes of it:\n```julia-repl\njulia> methods(Bool)\n# 10 methods for type constructor:\n [1] Bool(x::BigFloat)\n @ Base.MPFR mpfr.jl:393\n [2] Bool(x::Float16)\n @ Base float.jl:338\n [3] Bool(x::Rational)\n @ Base rational.jl:138\n [4] Bool(x::Real)\n @ Base float.jl:233\n [5] (dt::Type{<:Integer})(ip::Sockets.IPAddr)\n @ Sockets ~/tmp/jl/jl/julia-nightly-assert/share/julia/stdlib/v1.11/Sockets/src/IPAddr.jl:11\n [6] (::Type{T})(x::Enum{T2}) where {T<:Integer, T2<:Integer}\n @ Base.Enums Enums.jl:19\n [7] (::Type{T})(z::Complex) where T<:Real\n @ Base complex.jl:44\n [8] (::Type{T})(x::Base.TwicePrecision) where T<:Number\n @ Base twiceprecision.jl:265\n [9] (::Type{T})(x::T) where T<:Number\n @ boot.jl:894\n [10] (::Type{T})(x::AbstractChar) where T<:Union{AbstractChar, Number}\n @ char.jl:50\n```\nThe usual constructor syntax is exactly equivalent to the function-like object syntax, so trying to define a method with each syntax will cause the first method to be overwritten by the next one:"}
{"text": "## [Constructors are just callable objects](#Constructors-are-just-callable-objects)\n```julia-repl\njulia> struct S\n f::Int\n end\n\njulia> S() = S(7)\nS\n\njulia> (::Type{S})() = S(8) # overwrites the previous constructor method\n\njulia> S()\nS(8)\n```\n- [1](#citeref-1)Nomenclature: while the term \"constructor\" generally refers to the entire function which constructs objects of a type, it is common to abuse terminology slightly and refer to specific constructor methods as \"constructors\". In such situations, it is generally clear from the context that the term is used to mean \"constructor method\" rather than \"constructor function\", especially as it is often used in the sense of singling out a particular method of the constructor from all of the others.\n------------------------------------------------------------------------"}
{"text": "# Conversion and Promotion · The Julia Language\nSource: https://docs.julialang.org/en/v1/manual/conversion-and-promotion/"}
{"text": "# [Conversion and Promotion](#conversion-and-promotion)\nJulia has a system for promoting arguments of mathematical operators to a common type, which has been mentioned in various other sections, including [Integers and Floating-Point Numbers](../integers-and-floating-point-numbers/#Integers-and-Floating-Point-Numbers), [Mathematical Operations and Elementary Functions](../mathematical-operations/#Mathematical-Operations-and-Elementary-Functions), [Types](../types/#man-types), and [Methods](../methods/#Methods). In this section, we explain how this promotion system works, as well as how to extend it to new types and apply it to functions besides built-in mathematical operators. Traditionally, programming languages fall into two camps with respect to promotion of arithmetic arguments:"}
{"text": "# [Conversion and Promotion](#conversion-and-promotion)\n- **Automatic promotion for built-in arithmetic types and operators.** In most languages, built-in numeric types, when used as operands to arithmetic operators with infix syntax, such as `+`, `-`, `*`, and `/`, are automatically promoted to a common type to produce the expected results. C, Java, Perl, and Python, to name a few, all correctly compute the sum `1 + 1.5` as the floating-point value `2.5`, even though one of the operands to `+` is an integer. These systems are convenient and designed carefully enough that they are generally all-but-invisible to the programmer: hardly anyone consciously thinks of this promotion taking place when writing such an expression, but compilers and interpreters must perform conversion before addition since integers and floating-point values cannot be added as-is. Complex rules for such automatic conversions are thus inevitably part of specifications and implementations for such languages."}
{"text": "# [Conversion and Promotion](#conversion-and-promotion)\n- **No automatic promotion.** This camp includes Ada and ML – very \"strict\" statically typed languages. In these languages, every conversion must be explicitly specified by the programmer. Thus, the example expression `1 + 1.5` would be a compilation error in both Ada and ML. Instead one must write `real(1) + 1.5`, explicitly converting the integer `1` to a floating-point value before performing addition. Explicit conversion everywhere is so inconvenient, however, that even Ada has some degree of automatic conversion: integer literals are promoted to the expected integer type automatically, and floating-point literals are similarly promoted to appropriate floating-point types."}
{"text": "# [Conversion and Promotion](#conversion-and-promotion)\nIn a sense, Julia falls into the \"no automatic promotion\" category: mathematical operators are just functions with special syntax, and the arguments of functions are never automatically converted. However, one may observe that applying mathematical operations to a wide variety of mixed argument types is just an extreme case of polymorphic multiple dispatch – something which Julia's dispatch and type systems are particularly well-suited to handle. \"Automatic\" promotion of mathematical operands simply emerges as a special application: Julia comes with pre-defined catch-all dispatch rules for mathematical operators, invoked when no specific implementation exists for some combination of operand types. These catch-all rules first promote all operands to a common type using user-definable promotion rules, and then invoke a specialized implementation of the operator in question for the resulting values, now of the same type. User-defined types can easily participate in this promotion system by defining methods for conversion to and from other types, and providing a handful of promotion rules defining what types they should promote to when mixed with other types."}
{"text": "## [Conversion](#Conversion)\nThe standard way to obtain a value of a certain type `T` is to call the type's constructor, `T(x)`. However, there are cases where it's convenient to convert a value from one type to another without the programmer asking for it explicitly. One example is assigning a value into an array: if `A` is a `Vector{Float64}`, the expression `A[1] = 2` should work by automatically converting the `2` from `Int` to `Float64`, and storing the result in the array. This is done via the [`convert`](../../base/base/#Base.convert) function.\nThe `convert` function generally takes two arguments: the first is a type object and the second is a value to convert to that type. The returned value is the value converted to an instance of given type. The simplest way to understand this function is to see it in action:\n```julia-repl\njulia> x = 12\n12\n\njulia> typeof(x)\nInt64\n\njulia> xu = convert(UInt8, x)\n0x0c\n\njulia> typeof(xu)\nUInt8\n\njulia> xf = convert(AbstractFloat, x)\n12.0\n\njulia> typeof(xf)\nFloat64\n\njulia> a = Any[1 2 3; 4 5 6]\n2×3 Matrix{Any}:\n 1 2 3\n 4 5 6\n\njulia> convert(Array{Float64}, a)\n2×3 Matrix{Float64}:\n 1.0 2.0 3.0\n 4.0 5.0 6.0\n```\nConversion isn't always possible, in which case a [`MethodError`](../../base/base/#Core.MethodError) is thrown indicating that `convert` doesn't know how to perform the requested conversion:\n```julia-repl\njulia> convert(AbstractFloat, \"foo\")\nERROR: MethodError: Cannot `convert` an object of type String to an object of type AbstractFloat\n[...]\n```"}
{"text": "## [Conversion](#Conversion)\nSome languages consider parsing strings as numbers or formatting numbers as strings to be conversions (many dynamic languages will even perform conversion for you automatically). This is not the case in Julia. Even though some strings can be parsed as numbers, most strings are not valid representations of numbers, and only a very limited subset of them are. Therefore in Julia the dedicated [`parse`](../../base/numbers/#Base.parse) function must be used to perform this operation, making it more explicit."}
{"text": "### [When is convert called?](#When-is-convert-called?)\nThe following language constructs call `convert`:\n- Assigning to an array converts to the array's element type.\n- Assigning to a field of an object converts to the declared type of the field.\n- Constructing an object with [`new`](../../base/base/#new) converts to the object's declared field types.\n- Assigning to a variable with a declared type (e.g. `local x::T`) converts to that type.\n- A function with a declared return type converts its return value to that type.\n- Passing a value to [`ccall`](../../base/c/#ccall) converts it to the corresponding argument type."}
{"text": "### [Conversion vs. Construction](#Conversion-vs.-Construction)\nNote that the behavior of `convert(T, x)` appears to be nearly identical to `T(x)`. Indeed, it usually is. However, there is a key semantic difference: since `convert` can be called implicitly, its methods are restricted to cases that are considered \"safe\" or \"unsurprising\". `convert` will only convert between types that represent the same basic kind of thing (e.g. different representations of numbers, or different string encodings). It is also usually lossless; converting a value to a different type and back again should result in the exact same value.\nThere are four general kinds of cases where constructors differ from `convert`:"}
{"text": "#### [Constructors for types unrelated to their arguments](#Constructors-for-types-unrelated-to-their-arguments)\nSome constructors don't implement the concept of \"conversion\". For example, `Timer(2)` creates a 2-second timer, which is not really a \"conversion\" from an integer to a timer."}
{"text": "#### [Mutable collections](#Mutable-collections)\n`convert(T, x)` is expected to return the original `x` if `x` is already of type `T`. In contrast, if `T` is a mutable collection type then `T(x)` should always make a new collection (copying elements from `x`)."}
{"text": "#### [Wrapper types](#Wrapper-types)\nFor some types which \"wrap\" other values, the constructor may wrap its argument inside a new object even if it is already of the requested type. For example `Some(x)` wraps `x` to indicate that a value is present (in a context where the result might be a `Some` or `nothing`). However, `x` itself might be the object `Some(y)`, in which case the result is `Some(Some(y))`, with two levels of wrapping. `convert(Some, x)`, on the other hand, would just return `x` since it is already a `Some`."}
{"text": "#### [Constructors that don't return instances of their own type](#Constructors-that-don't-return-instances-of-their-own-type)\nIn *very rare* cases it might make sense for the constructor `T(x)` to return an object not of type `T`. This could happen if a wrapper type is its own inverse (e.g. `Flip(Flip(x)) === x`), or to support an old calling syntax for backwards compatibility when a library is restructured. But `convert(T, x)` should always return a value of type `T`."}
{"text": "### [Defining New Conversions](#Defining-New-Conversions)\nWhen defining a new type, initially all ways of creating it should be defined as constructors. If it becomes clear that implicit conversion would be useful, and that some constructors meet the above \"safety\" criteria, then `convert` methods can be added. These methods are typically quite simple, as they only need to call the appropriate constructor. Such a definition might look like this:\n```julia\nimport Base: convert\nconvert(::Type{MyType}, x) = MyType(x)\n```\nThe type of the first argument of this method is [`Type{MyType}`](../types/#man-typet-type), the only instance of which is `MyType`. Thus, this method is only invoked when the first argument is the type value `MyType`. Notice the syntax used for the first argument: the argument name is omitted prior to the `::` symbol, and only the type is given. This is the syntax in Julia for a function argument whose type is specified but whose value does not need to be referenced by name.\nAll instances of some abstract types are by default considered \"sufficiently similar\" that a universal `convert` definition is provided in Julia Base. For example, this definition states that it's valid to `convert` any `Number` type to any other by calling a 1-argument constructor:\n```julia\nconvert(::Type{T}, x::Number) where {T<:Number} = T(x)::T\n```\nThis means that new `Number` types only need to define constructors, since this definition will handle `convert` for them. An identity conversion is also provided to handle the case where the argument is already of the requested type:"}
{"text": "### [Defining New Conversions](#Defining-New-Conversions)\n```julia\nconvert(::Type{T}, x::T) where {T<:Number} = x\n```\nSimilar definitions exist for `AbstractString`, [`AbstractArray`](../../base/arrays/#Core.AbstractArray), and [`AbstractDict`](../../base/collections/#Base.AbstractDict)."}
{"text": "## [Promotion](#Promotion)\nPromotion refers to converting values of mixed types to a single common type. Although it is not strictly necessary, it is generally implied that the common type to which the values are converted can faithfully represent all of the original values. In this sense, the term \"promotion\" is appropriate since the values are converted to a \"greater\" type – i.e. one which can represent all of the input values in a single common type. It is important, however, not to confuse this with object-oriented (structural) super-typing, or Julia's notion of abstract super-types: promotion has nothing to do with the type hierarchy, and everything to do with converting between alternate representations. For instance, although every [`Int32`](../../base/numbers/#Core.Int32) value can also be represented as a [`Float64`](../../base/numbers/#Core.Float64) value, `Int32` is not a subtype of `Float64`.\nPromotion to a common \"greater\" type is performed in Julia by the [`promote`](../../base/base/#Base.promote) function, which takes any number of arguments, and returns a tuple of the same number of values, converted to a common type, or throws an exception if promotion is not possible. The most common use case for promotion is to convert numeric arguments to a common type:\n```julia-repl\njulia> promote(1, 2.5)\n(1.0, 2.5)\n\njulia> promote(1, 2.5, 3)\n(1.0, 2.5, 3.0)\n\njulia> promote(2, 3//4)\n(2//1, 3//4)\n\njulia> promote(1, 2.5, 3, 3//4)\n(1.0, 2.5, 3.0, 0.75)\n\njulia> promote(1.5, im)\n(1.5 + 0.0im, 0.0 + 1.0im)\n\njulia> promote(1 + 2im, 3//4)\n(1//1 + 2//1*im, 3//4 + 0//1*im)\n```"}
{"text": "## [Promotion](#Promotion)\nFloating-point values are promoted to the largest of the floating-point argument types. Integer values are promoted to the largest of the integer argument types. If the types are the same size but differ in signedness, the unsigned type is chosen. Mixtures of integers and floating-point values are promoted to a floating-point type big enough to hold all the values. Integers mixed with rationals are promoted to rationals. Rationals mixed with floats are promoted to floats. Complex values mixed with real values are promoted to the appropriate kind of complex value.\nThat is really all there is to using promotions. The rest is just a matter of clever application, the most typical \"clever\" application being the definition of catch-all methods for numeric operations like the arithmetic operators `+`, `-`, `*` and `/`. Here are some of the catch-all method definitions given in [`promotion.jl`](https://github.com/JuliaLang/julia/blob/master/base/promotion.jl):\n```julia\n+(x::Number, y::Number) = +(promote(x,y)...)\n-(x::Number, y::Number) = -(promote(x,y)...)\n*(x::Number, y::Number) = *(promote(x,y)...)\n/(x::Number, y::Number) = /(promote(x,y)...)\n```"}
{"text": "## [Promotion](#Promotion)\nThese method definitions say that in the absence of more specific rules for adding, subtracting, multiplying and dividing pairs of numeric values, promote the values to a common type and then try again. That's all there is to it: nowhere else does one ever need to worry about promotion to a common numeric type for arithmetic operations – it just happens automatically. There are definitions of catch-all promotion methods for a number of other arithmetic and mathematical functions in [`promotion.jl`](https://github.com/JuliaLang/julia/blob/master/base/promotion.jl), but beyond that, there are hardly any calls to `promote` required in Julia Base. The most common usages of `promote` occur in outer constructors methods, provided for convenience, to allow constructor calls with mixed types to delegate to an inner type with fields promoted to an appropriate common type. For example, recall that [`rational.jl`](https://github.com/JuliaLang/julia/blob/master/base/rational.jl) provides the following outer constructor method:\n```julia\nRational(n::Integer, d::Integer) = Rational(promote(n,d)...)\n```\nThis allows calls like the following to work:\n```julia-repl\njulia> x = Rational(Int8(15),Int32(-5))\n-3//1\n\njulia> typeof(x)\nRational{Int32}\n```\nFor most user-defined types, it is better practice to require programmers to supply the expected types to constructor functions explicitly, but sometimes, especially for numeric problems, it can be convenient to do promotion automatically."}
{"text": "### [Defining Promotion Rules](#Defining-Promotion-Rules)\nAlthough one could, in principle, define methods for the `promote` function directly, this would require many redundant definitions for all possible permutations of argument types. Instead, the behavior of `promote` is defined in terms of an auxiliary function called [`promote_rule`](../../base/base/#Base.promote_rule), which one can provide methods for. The `promote_rule` function takes a pair of type objects and returns another type object, such that instances of the argument types will be promoted to the returned type. Thus, by defining the rule:\n```julia\nimport Base: promote_rule\npromote_rule(::Type{Float64}, ::Type{Float32}) = Float64\n```\none declares that when 64-bit and 32-bit floating-point values are promoted together, they should be promoted to 64-bit floating-point. The promotion type does not need to be one of the argument types. For example, the following promotion rules both occur in Julia Base:\n```julia\npromote_rule(::Type{BigInt}, ::Type{Float64}) = BigFloat\npromote_rule(::Type{BigInt}, ::Type{Int8}) = BigInt\n```\nIn the latter case, the result type is [`BigInt`](../../base/numbers/#Base.GMP.BigInt) since `BigInt` is the only type large enough to hold integers for arbitrary-precision integer arithmetic. Also note that one does not need to define both `promote_rule(::Type{A}, ::Type{B})` and `promote_rule(::Type{B}, ::Type{A})` – the symmetry is implied by the way `promote_rule` is used in the promotion process."}
{"text": "### [Defining Promotion Rules](#Defining-Promotion-Rules)\nThe `promote_rule` function is used as a building block to define a second function called [`promote_type`](../../base/base/#Base.promote_type), which, given any number of type objects, returns the common type to which those values, as arguments to `promote` should be promoted. Thus, if one wants to know, in absence of actual values, what type a collection of values of certain types would promote to, one can use `promote_type`:\n```julia-repl\njulia> promote_type(Int8, Int64)\nInt64\n```\nNote that we do **not** overload `promote_type` directly: we overload `promote_rule` instead. `promote_type` uses `promote_rule`, and adds the symmetry. Overloading it directly can cause ambiguity errors. We overload `promote_rule` to define how things should be promoted, and we use `promote_type` to query that.\nInternally, `promote_type` is used inside of `promote` to determine what type argument values should be converted to for promotion. The curious reader can read the code in [`promotion.jl`](https://github.com/JuliaLang/julia/blob/master/base/promotion.jl), which defines the complete promotion mechanism in about 35 lines."}
{"text": "### [Case Study: Rational Promotions](#Case-Study:-Rational-Promotions)\nFinally, we finish off our ongoing case study of Julia's rational number type, which makes relatively sophisticated use of the promotion mechanism with the following promotion rules:\n```julia\nimport Base: promote_rule\npromote_rule(::Type{Rational{T}}, ::Type{S}) where {T<:Integer,S<:Integer} = Rational{promote_type(T,S)}\npromote_rule(::Type{Rational{T}}, ::Type{Rational{S}}) where {T<:Integer,S<:Integer} = Rational{promote_type(T,S)}\npromote_rule(::Type{Rational{T}}, ::Type{S}) where {T<:Integer,S<:AbstractFloat} = promote_type(T,S)\n```\nThe first rule says that promoting a rational number with any other integer type promotes to a rational type whose numerator/denominator type is the result of promotion of its numerator/denominator type with the other integer type. The second rule applies the same logic to two different types of rational numbers, resulting in a rational of the promotion of their respective numerator/denominator types. The third and final rule dictates that promoting a rational with a float results in the same type as promoting the numerator/denominator type with the float."}
{"text": "### [Case Study: Rational Promotions](#Case-Study:-Rational-Promotions)\nThis small handful of promotion rules, together with the type's constructors and the default `convert` method for numbers, are sufficient to make rational numbers interoperate completely naturally with all of Julia's other numeric types – integers, floating-point numbers, and complex numbers. By providing appropriate conversion methods and promotion rules in the same manner, any user-defined numeric type can interoperate just as naturally with Julia's predefined numerics.\n------------------------------------------------------------------------"}
{"text": "# Interfaces · The Julia Language\nSource: https://docs.julialang.org/en/v1/manual/interfaces/"}
{"text": "# [Interfaces](#Interfaces)\nA lot of the power and extensibility in Julia comes from a collection of informal interfaces. By extending a few specific methods to work for a custom type, objects of that type not only receive those functionalities, but they are also able to be used in other methods that are written to generically build upon those behaviors."}
{"text": "## [Iteration](#man-interface-iteration)\nThere are two methods that are always required:\n| Required method | Brief description |\n|:--------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------|\n| [`iterate(iter)`](../../base/collections/#Base.iterate) | Returns either a tuple of the first item and initial state or [`nothing`](../../base/constants/#Core.nothing) if empty |\n| `iterate(iter, state)` | Returns either a tuple of the next item and next state or `nothing` if no items remain |\nThere are several more methods that should be defined in some circumstances. Please note that you should always define at least one of `Base.IteratorSize(IterType)` and `length(iter)` because the default definition of `Base.IteratorSize(IterType)` is `Base.HasLength()`."}
{"text": "## [Iteration](#man-interface-iteration)\n| Method | When should this method be defined? | Default definition | Brief description |\n|:-------------------------------------------------------------------------------|:----------------------------------------------------------------------------|:-------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [`Base.IteratorSize(IterType)`](../../base/collections/#Base.IteratorSize) | If default is not appropriate | `Base.HasLength()` | One of `Base.HasLength()`, `Base.HasShape{N}()`, `Base.IsInfinite()`, or `Base.SizeUnknown()` as appropriate |\n| [`length(iter)`](../../base/arrays/#Base.length-Tuple%7BAbstractArray%7D) | If `Base.IteratorSize()` returns `Base.HasLength()` or `Base.HasShape{N}()` | (*undefined*) | The number of items, if known |\n| [`size(iter, [dim])`](../../base/arrays/#Base.size) | If `Base.IteratorSize()` returns `Base.HasShape{N}()` | (*undefined*) | The number of items in each dimension, if known |\n| [`Base.IteratorEltype(IterType)`](../../base/collections/#Base.IteratorEltype) | If default is not appropriate | `Base.HasEltype()` | Either `Base.EltypeUnknown()` or `Base.HasEltype()` as appropriate |\n| [`eltype(IterType)`](../../base/collections/#Base.eltype) | If default is not appropriate | `Any` | The type of the first entry of the tuple returned by `iterate()` |\n| [`Base.isdone(iter, [state])`](../../base/collections/#Base.isdone) | **Must** be defined if iterator is stateful | `missing` | Fast-path hint for iterator completion. If not defined for a stateful iterator then functions that check for done-ness, like `isempty()` and `zip()`, may mutate the iterator and cause buggy behaviour! |"}
{"text": "## [Iteration](#man-interface-iteration)\nSequential iteration is implemented by the [`iterate`](../../base/collections/#Base.iterate) function. Instead of mutating objects as they are iterated over, Julia iterators may keep track of the iteration state externally from the object. The return value from iterate is always either a tuple of a value and a state, or `nothing` if no elements remain. The state object will be passed back to the iterate function on the next iteration and is generally considered an implementation detail private to the iterable object.\nAny object that defines this function is iterable and can be used in the [many functions that rely upon iteration](../../base/collections/#lib-collections-iteration). It can also be used directly in a [`for`](../../base/base/#for) loop since the syntax:\n```julia\nfor item in iter # or \"for item = iter\"\n # body\nend\n```\nis translated into:\n```julia\nnext = iterate(iter)\nwhile next !== nothing\n (item, state) = next\n # body\n next = iterate(iter, state)\nend\n```\nA simple example is an iterable sequence of square numbers with a defined length:\n```julia-repl\njulia> struct Squares\n count::Int\n end\n\njulia> Base.iterate(S::Squares, state=1) = state > S.count ? nothing : (state*state, state+1)\n```\nWith only [`iterate`](../../base/collections/#Base.iterate) definition, the `Squares` type is already pretty powerful. We can iterate over all the elements:\n```julia-repl\njulia> for item in Squares(7)\n println(item)\n end\n1\n4\n9\n16\n25\n36\n49\n```"}
{"text": "## [Iteration](#man-interface-iteration)\nWe can use many of the builtin methods that work with iterables, like [`in`](../../base/collections/#Base.in) or [`sum`](../../base/collections/#Base.sum):\n```julia-repl\njulia> 25 in Squares(10)\ntrue\n\njulia> sum(Squares(100))\n338350\n```\nThere are a few more methods we can extend to give Julia more information about this iterable collection. We know that the elements in a `Squares` sequence will always be `Int`. By extending the [`eltype`](../../base/collections/#Base.eltype) method, we can give that information to Julia and help it make more specialized code in the more complicated methods. We also know the number of elements in our sequence, so we can extend [`length`](../../base/collections/#Base.length), too:\n```julia-repl\njulia> Base.eltype(::Type{Squares}) = Int # Note that this is defined for the type\n\njulia> Base.length(S::Squares) = S.count\n```\nNow, when we ask Julia to [`collect`](../../base/collections/#Base.collect-Tuple%7BAny%7D) all the elements into an array it can preallocate a `Vector{Int}` of the right size instead of naively [`push!`](../../base/collections/#Base.push!)ing each element into a `Vector{Any}`:\n```julia-repl\njulia> collect(Squares(4))\n4-element Vector{Int64}:\n 1\n 4\n 9\n 16\n```\nWhile we can rely upon generic implementations, we can also extend specific methods where we know there is a simpler algorithm. For example, there's a formula to compute the sum of squares, so we can override the generic iterative version with a more performant solution:"}
{"text": "## [Iteration](#man-interface-iteration)\n```julia-repl\njulia> Base.sum(S::Squares) = (n = S.count; return n*(n+1)*(2n+1)÷6)\n\njulia> sum(Squares(1803))\n1955361914\n```\nThis is a very common pattern throughout Julia Base: a small set of required methods define an informal interface that enable many fancier behaviors. In some cases, types will want to additionally specialize those extra behaviors when they know a more efficient algorithm can be used in their specific case.\nIt is also often useful to allow iteration over a collection in *reverse order* by iterating over [`Iterators.reverse(iterator)`](../../base/iterators/#Base.Iterators.reverse). To actually support reverse-order iteration, however, an iterator type `T` needs to implement `iterate` for `Iterators.Reverse{T}`. (Given `r::Iterators.Reverse{T}`, the underling iterator of type `T` is `r.itr`.) In our `Squares` example, we would implement `Iterators.Reverse{Squares}` methods:\n```julia-repl\njulia> Base.iterate(rS::Iterators.Reverse{Squares}, state=rS.itr.count) = state < 1 ? nothing : (state*state, state-1)\n\njulia> collect(Iterators.reverse(Squares(4)))\n4-element Vector{Int64}:\n 16\n 9\n 4\n 1\n```"}
{"text": "## [Indexing](#Indexing)\n| Methods to implement | Brief description |\n|:---------------------|:--------------------------------------------------------------|\n| `getindex(X, i)` | `X[i]`, indexed access, non-scalar `i` should allocate a copy |\n| `setindex!(X, v, i)` | `X[i] = v`, indexed assignment |\n| `firstindex(X)` | The first index, used in `X[begin]` |\n| `lastindex(X)` | The last index, used in `X[end]` |\nFor the `Squares` iterable above, we can easily compute the `i`th element of the sequence by squaring it. We can expose this as an indexing expression `S[i]`. To opt into this behavior, `Squares` simply needs to define [`getindex`](../../base/collections/#Base.getindex):\n```julia-repl\njulia> function Base.getindex(S::Squares, i::Int)\n 1 <= i <= S.count || throw(BoundsError(S, i))\n return i*i\n end\n\njulia> Squares(100)[23]\n529\n```\nAdditionally, to support the syntax `S[begin]` and `S[end]`, we must define [`firstindex`](../../base/collections/#Base.firstindex) and [`lastindex`](../../base/collections/#Base.lastindex) to specify the first and last valid indices, respectively:\n```julia-repl\njulia> Base.firstindex(S::Squares) = 1\n\njulia> Base.lastindex(S::Squares) = length(S)\n\njulia> Squares(23)[end]\n529\n```"}
{"text": "## [Indexing](#Indexing)\nFor multi-dimensional `begin`/`end` indexing as in `a[3, begin, 7]`, for example, you should define `firstindex(a, dim)` and `lastindex(a, dim)` (which default to calling `first` and `last` on `axes(a, dim)`, respectively).\nNote, though, that the above *only* defines [`getindex`](../../base/collections/#Base.getindex) with one integer index. Indexing with anything other than an `Int` will throw a [`MethodError`](../../base/base/#Core.MethodError) saying that there was no matching method. In order to support indexing with ranges or vectors of `Int`s, separate methods must be written:\n```julia-repl\njulia> Base.getindex(S::Squares, i::Number) = S[convert(Int, i)]\n\njulia> Base.getindex(S::Squares, I) = [S[i] for i in I]\n\njulia> Squares(10)[[3,4.,5]]\n3-element Vector{Int64}:\n 9\n 16\n 25\n```\nWhile this is starting to support more of the [indexing operations supported by some of the builtin types](../arrays/#man-array-indexing), there's still quite a number of behaviors missing. This `Squares` sequence is starting to look more and more like a vector as we've added behaviors to it. Instead of defining all these behaviors ourselves, we can officially define it as a subtype of an [`AbstractArray`](../../base/arrays/#Core.AbstractArray)."}
{"text": "## [Abstract Arrays](#man-interface-array)\n| Methods to implement | | Brief description |\n|:-----------------------------------------|:---------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `size(A)` | | Returns a tuple containing the dimensions of `A` |\n| `getindex(A, i::Int)` | | (if `IndexLinear`) Linear scalar indexing |\n| `getindex(A, I::Vararg{Int, N})` | | (if `IndexCartesian`, where `N = ndims(A)`) N-dimensional scalar indexing |\n| **Optional methods** | **Default definition** | **Brief description** |\n| `IndexStyle(::Type)` | `IndexCartesian()` | Returns either `IndexLinear()` or `IndexCartesian()`. See the description below. |\n| `setindex!(A, v, i::Int)` | | (if `IndexLinear`) Scalar indexed assignment |\n| `setindex!(A, v, I::Vararg{Int, N})` | | (if `IndexCartesian`, where `N = ndims(A)`) N-dimensional scalar indexed assignment |\n| `getindex(A, I...)` | defined in terms of scalar `getindex` | [Multidimensional and nonscalar indexing](../arrays/#man-array-indexing) |\n| `setindex!(A, X, I...)` | defined in terms of scalar `setindex!` | [Multidimensional and nonscalar indexed assignment](../arrays/#man-array-indexing) |\n| `iterate` | defined in terms of scalar `getindex` | Iteration |\n| `length(A)` | `prod(size(A))` | Number of elements |\n| `similar(A)` | `similar(A, eltype(A), size(A))` | Return a mutable array with the same shape and element type |\n| `similar(A, ::Type{S})` | `similar(A, S, size(A))` | Return a mutable array with the same shape and the specified element type |\n| `similar(A, dims::Dims)` | `similar(A, eltype(A), dims)` | Return a mutable array with the same element type and size *dims* |\n| `similar(A, ::Type{S}, dims::Dims)` | `Array{S}(undef, dims)` | Return a mutable array with the specified element type and size |\n| **Non-traditional indices** | **Default 
definition** | **Brief description** |\n| `axes(A)` | `map(OneTo, size(A))` | Return a tuple of `AbstractUnitRange{<:Integer}` of valid indices. The axes should be their own axes, that is `axes.(axes(A),1) == axes(A)` should be satisfied. |\n| `similar(A, ::Type{S}, inds)` | `similar(A, S, Base.to_shape(inds))` | Return a mutable array with the specified indices `inds` (see below) |\n| `similar(T::Union{Type,Function}, inds)` | `T(Base.to_shape(inds))` | Return an array similar to `T` with the specified indices `inds` (see below) |"}
{"text": "## [Abstract Arrays](#man-interface-array)\nIf a type is defined as a subtype of `AbstractArray`, it inherits a very large set of rich behaviors including iteration and multidimensional indexing built on top of single-element access. See the [arrays manual page](../arrays/#man-multi-dim-arrays) and the [Julia Base section](../../base/arrays/#lib-arrays) for more supported methods.\nA key part in defining an `AbstractArray` subtype is [`IndexStyle`](../../base/arrays/#Base.IndexStyle). Since indexing is such an important part of an array and often occurs in hot loops, it's important to make both indexing and indexed assignment as efficient as possible. Array data structures are typically defined in one of two ways: either it most efficiently accesses its elements using just one index (linear indexing) or it intrinsically accesses the elements with indices specified for every dimension. These two modalities are identified by Julia as `IndexLinear()` and `IndexCartesian()`. Converting a linear index to multiple indexing subscripts is typically very expensive, so this provides a traits-based mechanism to enable efficient generic code for all array types."}
{"text": "## [Abstract Arrays](#man-interface-array)\nThis distinction determines which scalar indexing methods the type must define. `IndexLinear()` arrays are simple: just define `getindex(A::ArrayType, i::Int)`. When the array is subsequently indexed with a multidimensional set of indices, the fallback `getindex(A::AbstractArray, I...)` efficiently converts the indices into one linear index and then calls the above method. `IndexCartesian()` arrays, on the other hand, require methods to be defined for each supported dimensionality with `ndims(A)` `Int` indices. For example, [`SparseMatrixCSC`](../../stdlib/SparseArrays/#SparseArrays.SparseMatrixCSC) from the `SparseArrays` standard library module, only supports two dimensions, so it just defines `getindex(A::SparseMatrixCSC, i::Int, j::Int)`. The same holds for [`setindex!`](../../base/collections/#Base.setindex!).\nReturning to the sequence of squares from above, we could instead define it as a subtype of an `AbstractArray{Int, 1}`:\n```julia-repl\njulia> struct SquaresVector <: AbstractArray{Int, 1}\n count::Int\n end\n\njulia> Base.size(S::SquaresVector) = (S.count,)\n\njulia> Base.IndexStyle(::Type{<:SquaresVector}) = IndexLinear()\n\njulia> Base.getindex(S::SquaresVector, i::Int) = i*i\n```"}
{"text": "## [Abstract Arrays](#man-interface-array)\nNote that it's very important to specify the two parameters of the `AbstractArray`; the first defines the [`eltype`](../../base/collections/#Base.eltype), and the second defines the [`ndims`](../../base/arrays/#Base.ndims). That supertype and those three methods are all it takes for `SquaresVector` to be an iterable, indexable, and completely functional array:\n```julia-repl\njulia> s = SquaresVector(4)\n4-element SquaresVector:\n 1\n 4\n 9\n 16\n\njulia> s[s .> 8]\n2-element Vector{Int64}:\n 9\n 16\n\njulia> s + s\n4-element Vector{Int64}:\n 2\n 8\n 18\n 32\n\njulia> sin.(s)\n4-element Vector{Float64}:\n 0.8414709848078965\n -0.7568024953079282\n 0.4121184852417566\n -0.2879033166650653\n```\nAs a more complicated example, let's define our own toy N-dimensional sparse-like array type built on top of [`Dict`](../../base/collections/#Base.Dict):\n```julia-repl\njulia> struct SparseArray{T,N} <: AbstractArray{T,N}\n data::Dict{NTuple{N,Int}, T}\n dims::NTuple{N,Int}\n end\n\njulia> SparseArray(::Type{T}, dims::Int...) where {T} = SparseArray(T, dims);\n\njulia> SparseArray(::Type{T}, dims::NTuple{N,Int}) where {T,N} = SparseArray{T,N}(Dict{NTuple{N,Int}, T}(), dims);\n\njulia> Base.size(A::SparseArray) = A.dims\n\njulia> Base.similar(A::SparseArray, ::Type{T}, dims::Dims) where {T} = SparseArray(T, dims)\n\njulia> Base.getindex(A::SparseArray{T,N}, I::Vararg{Int,N}) where {T,N} = get(A.data, I, zero(T))\n\njulia> Base.setindex!(A::SparseArray{T,N}, v, I::Vararg{Int,N}) where {T,N} = (A.data[I] = v)\n```"}
{"text": "## [Abstract Arrays](#man-interface-array)\nNotice that this is an `IndexCartesian` array, so we must manually define [`getindex`](../../base/collections/#Base.getindex) and [`setindex!`](../../base/collections/#Base.setindex!) at the dimensionality of the array. Unlike the `SquaresVector`, we are able to define [`setindex!`](../../base/collections/#Base.setindex!), and so we can mutate the array:\n```julia-repl\njulia> A = SparseArray(Float64, 3, 3)\n3×3 SparseArray{Float64, 2}:\n 0.0 0.0 0.0\n 0.0 0.0 0.0\n 0.0 0.0 0.0\n\njulia> fill!(A, 2)\n3×3 SparseArray{Float64, 2}:\n 2.0 2.0 2.0\n 2.0 2.0 2.0\n 2.0 2.0 2.0\n\njulia> A[:] = 1:length(A); A\n3×3 SparseArray{Float64, 2}:\n 1.0 4.0 7.0\n 2.0 5.0 8.0\n 3.0 6.0 9.0\n```\nThe result of indexing an `AbstractArray` can itself be an array (for instance when indexing by an `AbstractRange`). The `AbstractArray` fallback methods use [`similar`](../../base/arrays/#Base.similar) to allocate an `Array` of the appropriate size and element type, which is filled in using the basic indexing method described above. However, when implementing an array wrapper you often want the result to be wrapped as well:\n```julia-repl\njulia> A[1:2,:]\n2×3 SparseArray{Float64, 2}:\n 1.0 4.0 7.0\n 2.0 5.0 8.0\n```"}
{"text": "## [Abstract Arrays](#man-interface-array)\nIn this example it is accomplished by defining `Base.similar(A::SparseArray, ::Type{T}, dims::Dims) where T` to create the appropriate wrapped array. (Note that while `similar` supports 1- and 2-argument forms, in most case you only need to specialize the 3-argument form.) For this to work it's important that `SparseArray` is mutable (supports `setindex!`). Defining `similar`, `getindex` and `setindex!` for `SparseArray` also makes it possible to [`copy`](../../base/base/#Base.copy) the array:\n```julia-repl\njulia> copy(A)\n3×3 SparseArray{Float64, 2}:\n 1.0 4.0 7.0\n 2.0 5.0 8.0\n 3.0 6.0 9.0\n```\nIn addition to all the iterable and indexable methods from above, these types can also interact with each other and use most of the methods defined in Julia Base for `AbstractArrays`:\n```julia-repl\njulia> A[SquaresVector(3)]\n3-element SparseArray{Float64, 1}:\n 1.0\n 4.0\n 9.0\n\njulia> sum(A)\n45.0\n```\nIf you are defining an array type that allows non-traditional indexing (indices that start at something other than 1), you should specialize [`axes`](../../base/arrays/#Base.axes-Tuple%7BAny%7D). You should also specialize [`similar`](../../base/arrays/#Base.similar) so that the `dims` argument (ordinarily a `Dims` size-tuple) can accept `AbstractUnitRange` objects, perhaps range-types `Ind` of your own design. For more information, see [Arrays with custom indices](../../devdocs/offset-arrays/#man-custom-indices)."}
{"text": "## [Strided Arrays](#man-interface-strided-arrays)\n| Methods to implement | | Brief description |\n|:-----------------------------------------|:-----------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `strides(A)` | | Return the distance in memory (in number of elements) between adjacent elements in each dimension as a tuple. If `A` is an `AbstractArray{T,0}`, this should return an empty tuple. |\n| `Base.unsafe_convert(::Type{Ptr{T}}, A)` | | Return the native address of an array. |\n| `Base.elsize(::Type{<:A})` | | Return the stride between consecutive elements in the array. |\n| **Optional methods** | **Default definition** | **Brief description** |\n| `stride(A, i::Int)` | `strides(A)[i]` | Return the distance in memory (in number of elements) between adjacent elements in dimension k. |"}
{"text": "## [Strided Arrays](#man-interface-strided-arrays)\nA strided array is a subtype of `AbstractArray` whose entries are stored in memory with fixed strides. Provided the element type of the array is compatible with BLAS, a strided array can utilize BLAS and LAPACK routines for more efficient linear algebra routines. A typical example of a user-defined strided array is one that wraps a standard `Array` with additional structure.\nWarning: do not implement these methods if the underlying storage is not actually strided, as it may lead to incorrect results or segmentation faults.\nHere are some examples to demonstrate which type of arrays are strided and which are not:\n```julia\n1:5 # not strided (there is no storage associated with this array.)\nVector(1:5) # is strided with strides (1,)\nA = [1 5; 2 6; 3 7; 4 8] # is strided with strides (1,4)\nV = view(A, 1:2, :) # is strided with strides (1,4)\nV = view(A, 1:2:3, 1:2) # is strided with strides (2,4)\nV = view(A, [1,2,4], :) # is not strided, as the spacing between rows is not fixed.\n```"}
{"text": "## [Customizing broadcasting](#man-interfaces-broadcasting)\n| Methods to implement | Brief description |\n|:-----------------------------------------------------------|:----------------------------------------------------------------------------------------------------|\n| `Base.BroadcastStyle(::Type{SrcType}) = SrcStyle()` | Broadcasting behavior of `SrcType` |\n| `Base.similar(bc::Broadcasted{DestStyle}, ::Type{ElType})` | Allocation of output container |\n| **Optional methods** | |\n| `Base.BroadcastStyle(::Style1, ::Style2) = Style12()` | Precedence rules for mixing styles |\n| `Base.axes(x)` | Declaration of the indices of `x`, as per [`axes(x)`](../../base/arrays/#Base.axes-Tuple%7BAny%7D). |\n| `Base.broadcastable(x)` | Convert `x` to an object that has `axes` and supports indexing |\n| **Bypassing default machinery** | |\n| `Base.copy(bc::Broadcasted{DestStyle})` | Custom implementation of `broadcast` |\n| `Base.copyto!(dest, bc::Broadcasted{DestStyle})` | Custom implementation of `broadcast!`, specializing on `DestStyle` |\n| `Base.copyto!(dest::DestType, bc::Broadcasted{Nothing})` | Custom implementation of `broadcast!`, specializing on `DestType` |\n| `Base.Broadcast.broadcasted(f, args...)` | Override the default lazy behavior within a fused expression |\n| `Base.Broadcast.instantiate(bc::Broadcasted{DestStyle})` | Override the computation of the lazy broadcast's axes |"}
{"text": "## [Customizing broadcasting](#man-interfaces-broadcasting)\n[Broadcasting](../arrays/#Broadcasting) is triggered by an explicit call to `broadcast` or `broadcast!`, or implicitly by \"dot\" operations like `A .+ b` or `f.(x, y)`. Any object that has [`axes`](../../base/arrays/#Base.axes-Tuple%7BAny%7D) and supports indexing can participate as an argument in broadcasting, and by default the result is stored in an `Array`. This basic framework is extensible in three major ways:\n- Ensuring that all arguments support broadcast\n- Selecting an appropriate output array for the given set of arguments\n- Selecting an efficient implementation for the given set of arguments\nNot all types support `axes` and indexing, but many are convenient to allow in broadcast. The [`Base.broadcastable`](../../base/arrays/#Base.Broadcast.broadcastable) function is called on each argument to broadcast, allowing it to return something different that supports `axes` and indexing. By default, this is the identity function for all `AbstractArray`s and `Number`s — they already support `axes` and indexing.\nIf a type is intended to act like a \"0-dimensional scalar\" (a single object) rather than as a container for broadcasting, then the following method should be defined:\n```julia\nBase.broadcastable(o::MyType) = Ref(o)\n```"}
{"text": "## [Customizing broadcasting](#man-interfaces-broadcasting)\nthat returns the argument wrapped in a 0-dimensional [`Ref`](../../base/c/#Core.Ref) container. For example, such a wrapper method is defined for types themselves, functions, special singletons like [`missing`](../missing/#missing) and [`nothing`](../../base/constants/#Core.nothing), and dates.\nCustom array-like types can specialize `Base.broadcastable` to define their shape, but they should follow the convention that `collect(Base.broadcastable(x)) == collect(x)`. A notable exception is `AbstractString`; strings are special-cased to behave as scalars for the purposes of broadcast even though they are iterable collections of their characters (see [Strings](../../devdocs/ast/#Strings) for more).\nThe next two steps (selecting the output array and implementation) are dependent upon determining a single answer for a given set of arguments. Broadcast must take all the varied types of its arguments and collapse them down to just one output array and one implementation. Broadcast calls this single answer a \"style\". Every broadcastable object each has its own preferred style, and a promotion-like system is used to combine these styles into a single answer — the \"destination style\"."}
{"text": "### [Broadcast Styles](#Broadcast-Styles)\n`Base.BroadcastStyle` is the abstract type from which all broadcast styles are derived. When used as a function it has two possible forms, unary (single-argument) and binary. The unary variant states that you intend to implement specific broadcasting behavior and/or output type, and do not wish to rely on the default fallback [`Broadcast.DefaultArrayStyle`](../../base/arrays/#Base.Broadcast.DefaultArrayStyle).\nTo override these defaults, you can define a custom `BroadcastStyle` for your object:\n```julia\nstruct MyStyle <: Broadcast.BroadcastStyle end\nBase.BroadcastStyle(::Type{<:MyType}) = MyStyle()\n```\nIn some cases it might be convenient not to have to define `MyStyle`, in which case you can leverage one of the general broadcast wrappers:\n- `Base.BroadcastStyle(::Type{<:MyType}) = Broadcast.Style{MyType}()` can be used for arbitrary types.\n- `Base.BroadcastStyle(::Type{<:MyType}) = Broadcast.ArrayStyle{MyType}()` is preferred if `MyType` is an `AbstractArray`.\n- For `AbstractArrays` that only support a certain dimensionality, create a subtype of `Broadcast.AbstractArrayStyle{N}` (see below).\nWhen your broadcast operation involves several arguments, individual argument styles get combined to determine a single `DestStyle` that controls the type of the output container. For more details, see [below](#writing-binary-broadcasting-rules)."}
{"text": "### [Selecting an appropriate output array](#Selecting-an-appropriate-output-array)\nThe broadcast style is computed for every broadcasting operation to allow for dispatch and specialization. The actual allocation of the result array is handled by `similar`, using the Broadcasted object as its first argument.\n```julia\nBase.similar(bc::Broadcasted{DestStyle}, ::Type{ElType})\n```\nThe fallback definition is\n```julia\nsimilar(bc::Broadcasted{DefaultArrayStyle{N}}, ::Type{ElType}) where {N,ElType} =\n similar(Array{ElType}, axes(bc))\n```\nHowever, if needed you can specialize on any or all of these arguments. The final argument `bc` is a lazy representation of a (potentially fused) broadcast operation, a `Broadcasted` object. For these purposes, the most important fields of the wrapper are `f` and `args`, describing the function and argument list, respectively. Note that the argument list can — and often does — include other nested `Broadcasted` wrappers.\nFor a complete example, let's say you have created a type, `ArrayAndChar`, that stores an array and a single character:\n```julia\nstruct ArrayAndChar{T,N} <: AbstractArray{T,N}\n data::Array{T,N}\n char::Char\nend\nBase.size(A::ArrayAndChar) = size(A.data)\nBase.getindex(A::ArrayAndChar{T,N}, inds::Vararg{Int,N}) where {T,N} = A.data[inds...]\nBase.setindex!(A::ArrayAndChar{T,N}, val, inds::Vararg{Int,N}) where {T,N} = A.data[inds...] = val\nBase.showarg(io::IO, A::ArrayAndChar, toplevel) = print(io, typeof(A), \" with char '\", A.char, \"'\")\n```\nYou might want broadcasting to preserve the `char` \"metadata\". First we define"}
{"text": "### [Selecting an appropriate output array](#Selecting-an-appropriate-output-array)\n```julia\nBase.BroadcastStyle(::Type{<:ArrayAndChar}) = Broadcast.ArrayStyle{ArrayAndChar}()\n```\nThis means we must also define a corresponding `similar` method:\n```julia\nfunction Base.similar(bc::Broadcast.Broadcasted{Broadcast.ArrayStyle{ArrayAndChar}}, ::Type{ElType}) where ElType\n # Scan the inputs for the ArrayAndChar:\n A = find_aac(bc)\n # Use the char field of A to create the output\n ArrayAndChar(similar(Array{ElType}, axes(bc)), A.char)\nend\n\n\"`A = find_aac(As)` returns the first ArrayAndChar among the arguments.\"\nfind_aac(bc::Base.Broadcast.Broadcasted) = find_aac(bc.args)\nfind_aac(args::Tuple) = find_aac(find_aac(args[1]), Base.tail(args))\nfind_aac(x) = x\nfind_aac(::Tuple{}) = nothing\nfind_aac(a::ArrayAndChar, rest) = a\nfind_aac(::Any, rest) = find_aac(rest)\n```\nFrom these definitions, one obtains the following behavior:\n```julia-repl\njulia> a = ArrayAndChar([1 2; 3 4], 'x')\n2×2 ArrayAndChar{Int64, 2} with char 'x':\n 1 2\n 3 4\n\njulia> a .+ 1\n2×2 ArrayAndChar{Int64, 2} with char 'x':\n 2 3\n 4 5\n\njulia> a .+ [5,10]\n2×2 ArrayAndChar{Int64, 2} with char 'x':\n 6 7\n 13 14\n```"}
{"text": "### [Extending broadcast with custom implementations](#extending-in-place-broadcast)\nIn general, a broadcast operation is represented by a lazy `Broadcasted` container that holds onto the function to be applied alongside its arguments. Those arguments may themselves be more nested `Broadcasted` containers, forming a large expression tree to be evaluated. A nested tree of `Broadcasted` containers is directly constructed by the implicit dot syntax; `5 .+ 2 .* x` is transiently represented by `Broadcasted(+, 5, Broadcasted(*, 2, x))`, for example. This is invisible to users as it is immediately realized through a call to `copy`, but it is this container that provides the basis for broadcast's extensibility for authors of custom types. The built-in broadcast machinery will then determine the result type and size based upon the arguments, allocate it, and then finally copy the realization of the `Broadcasted` object into it with a default `copyto!(::AbstractArray, ::Broadcasted)` method. The built-in fallback `broadcast` and `broadcast!` methods similarly construct a transient `Broadcasted` representation of the operation so they can follow the same codepath. This allows custom array implementations to provide their own `copyto!` specialization to customize and optimize broadcasting. This is again determined by the computed broadcast style. This is such an important part of the operation that it is stored as the first type parameter of the `Broadcasted` type, allowing for dispatch and specialization."}
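{"text": "### [Extending broadcast with custom implementations](#extending-in-place-broadcast)\nThe lazy tree described above can be built and inspected by hand. As a minimal sketch (using `Base.Broadcast.broadcasted` and `Base.Broadcast.materialize`, with an arbitrary example array), constructing the tree for `5 .+ 2 .* x` and realizing it is equivalent to evaluating the dotted expression directly:\n```julia\nusing Base.Broadcast: broadcasted, materialize\n\nx = [1.0, 2.0, 3.0]\n\n# Build the lazy tree that `5 .+ 2 .* x` lowers to:\nbc = broadcasted(+, 5, broadcasted(*, 2, x))\n\n# `materialize` realizes the tree, just as the implicit `copy` call does:\nmaterialize(bc) == 5 .+ 2 .* x   # true\n```"}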
{"text": "### [Extending broadcast with custom implementations](#extending-in-place-broadcast)\nFor some types, the machinery to \"fuse\" operations across nested levels of broadcasting is not available or could be done more efficiently incrementally. In such cases, you may need or want to evaluate `x .* (x .+ 1)` as if it had been written `broadcast(*, x, broadcast(+, x, 1))`, where the inner operation is evaluated before tackling the outer operation. This sort of eager operation is directly supported by a bit of indirection; instead of directly constructing `Broadcasted` objects, Julia lowers the fused expression `x .* (x .+ 1)` to `Broadcast.broadcasted(*, x, Broadcast.broadcasted(+, x, 1))`. Now, by default, `broadcasted` just calls the `Broadcasted` constructor to create the lazy representation of the fused expression tree, but you can choose to override it for a particular combination of function and arguments.\nAs an example, the builtin `AbstractRange` objects use this machinery to optimize pieces of broadcasted expressions that can be eagerly evaluated purely in terms of the start, step, and length (or stop) instead of computing every single element. Just like all the other machinery, `broadcasted` also computes and exposes the combined broadcast style of its arguments, so instead of specializing on `broadcasted(f, args...)`, you can specialize on `broadcasted(::DestStyle, f, args...)` for any combination of style, function, and arguments.\nFor example, the following definition supports the negation of ranges:"}
{"text": "### [Extending broadcast with custom implementations](#extending-in-place-broadcast)\n```julia\nbroadcasted(::DefaultArrayStyle{1}, ::typeof(-), r::OrdinalRange) = range(-first(r), step=-step(r), length=length(r))\n```"}
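{"text": "### [Extending broadcast with custom implementations](#extending-in-place-broadcast)\nAs a quick sanity check of this behavior (a sketch with an arbitrary range; the definition above ships with Julia), negating a range eagerly yields another range rather than allocating a `Vector`:\n```julia\nr = 1:2:9\n\n# The specialized `broadcasted` method computes a new range eagerly\n# instead of negating every element into a fresh array:\ns = .-r\ns isa AbstractRange                  # true\ncollect(s) == [-1, -3, -5, -7, -9]   # true\n```"}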
{"text": "### [Extending in-place broadcasting](#extending-in-place-broadcast-2)\nIn-place broadcasting can be supported by defining the appropriate `copyto!(dest, bc::Broadcasted)` method. Because you might want to specialize either on `dest` or the specific subtype of `bc`, to avoid ambiguities between packages we recommend the following convention.\nIf you wish to specialize on a particular style `DestStyle`, define a method for\n```julia\ncopyto!(dest, bc::Broadcasted{DestStyle})\n```\nOptionally, with this form you can also specialize on the type of `dest`.\nIf instead you want to specialize on the destination type `DestType` without specializing on `DestStyle`, then you should define a method with the following signature:\n```julia\ncopyto!(dest::DestType, bc::Broadcasted{Nothing})\n```\nThis leverages a fallback implementation of `copyto!` that converts the wrapper into a `Broadcasted{Nothing}`. Consequently, specializing on `DestType` has lower precedence than methods that specialize on `DestStyle`.\nSimilarly, you can completely override out-of-place broadcasting with a `copy(::Broadcasted)` method."}
{"text": "#### [Working with Broadcasted objects](#Working-with-Broadcasted-objects)\nIn order to implement such a `copy` or `copyto!` method, of course, you must work with the `Broadcasted` wrapper to compute each element. There are two main ways of doing so:\n- `Broadcast.flatten` recomputes the potentially nested operation into a single function and flat list of arguments. You are responsible for implementing the broadcasting shape rules yourself, but this may be helpful in limited situations.\n- Iterating over the `CartesianIndices` of the `axes(::Broadcasted)` and using indexing with the resulting `CartesianIndex` object to compute the result."}
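{"text": "#### [Working with Broadcasted objects](#Working-with-Broadcasted-objects)\nPutting the second approach into practice, the following is a rough sketch (not a definitive implementation) of a `copy` method that iterates over `CartesianIndices`. The style `MyStyle` is hypothetical, and `Broadcast.combine_eltypes` is an internal helper, so details may differ across Julia versions:\n```julia\nstruct MyStyle <: Base.Broadcast.BroadcastStyle end\n\nfunction Base.copy(bc::Base.Broadcast.Broadcasted{MyStyle})\n    # Compute an element type from the function and arguments (internal helper):\n    ElType = Base.Broadcast.combine_eltypes(bc.f, bc.args)\n    dest = similar(Array{ElType}, axes(bc))\n    for I in CartesianIndices(axes(bc))\n        dest[I] = bc[I]   # indexing a Broadcasted computes that element\n    end\n    return dest\nend\n```"}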
{"text": "### [Writing binary broadcasting rules](#writing-binary-broadcasting-rules)\nThe precedence rules are defined by binary `BroadcastStyle` calls:\n```julia\nBase.BroadcastStyle(::Style1, ::Style2) = Style12()\n```\nwhere `Style12` is the `BroadcastStyle` you want to choose for outputs involving arguments of `Style1` and `Style2`. For example,\n```julia\nBase.BroadcastStyle(::Broadcast.Style{Tuple}, ::Broadcast.AbstractArrayStyle{0}) = Broadcast.Style{Tuple}()\n```\nindicates that `Tuple` \"wins\" over zero-dimensional arrays (the output container will be a tuple). It is worth noting that you do not need to (and should not) define both argument orders of this call; defining one is sufficient no matter what order the user supplies the arguments in.\nFor `AbstractArray` types, defining a `BroadcastStyle` supersedes the fallback choice, [`Broadcast.DefaultArrayStyle`](../../base/arrays/#Base.Broadcast.DefaultArrayStyle). `DefaultArrayStyle` and the abstract supertype, `AbstractArrayStyle`, store the dimensionality as a type parameter to support specialized array types that have fixed dimensionality requirements.\n`DefaultArrayStyle` \"loses\" to any other `AbstractArrayStyle` that has been defined because of the following methods:\n```julia\nBroadcastStyle(a::AbstractArrayStyle{Any}, ::DefaultArrayStyle) = a\nBroadcastStyle(a::AbstractArrayStyle{N}, ::DefaultArrayStyle{N}) where N = a\nBroadcastStyle(a::AbstractArrayStyle{M}, ::DefaultArrayStyle{N}) where {M,N} =\n typeof(a)(Val(max(M, N)))\n```"}
{"text": "### [Writing binary broadcasting rules](#writing-binary-broadcasting-rules)\nYou do not need to write binary `BroadcastStyle` rules unless you want to establish precedence for two or more non-`DefaultArrayStyle` types.\nIf your array type does have fixed dimensionality requirements, then you should subtype `AbstractArrayStyle`. For example, the sparse array code has the following definitions:\n```julia\nstruct SparseVecStyle <: Broadcast.AbstractArrayStyle{1} end\nstruct SparseMatStyle <: Broadcast.AbstractArrayStyle{2} end\nBase.BroadcastStyle(::Type{<:SparseVector}) = SparseVecStyle()\nBase.BroadcastStyle(::Type{<:SparseMatrixCSC}) = SparseMatStyle()\n```\nWhenever you subtype `AbstractArrayStyle`, you also need to define rules for combining dimensionalities, by creating a constructor for your style that takes a `Val(N)` argument. For example:\n```julia\nSparseVecStyle(::Val{0}) = SparseVecStyle()\nSparseVecStyle(::Val{1}) = SparseVecStyle()\nSparseVecStyle(::Val{2}) = SparseMatStyle()\nSparseVecStyle(::Val{N}) where N = Broadcast.DefaultArrayStyle{N}()\n```\nThese rules indicate that the combination of a `SparseVecStyle` with 0- or 1-dimensional arrays yields another `SparseVecStyle`, that its combination with a 2-dimensional array yields a `SparseMatStyle`, and anything of higher dimensionality falls back to the dense arbitrary-dimensional framework. These rules allow broadcasting to keep the sparse representation for operations that result in one or two dimensional outputs, but produce an `Array` for any other dimensionality."}
{"text": "## [Instance Properties](#man-instance-properties)\n| Methods to implement | Default definition | Brief description |\n|:-------------------------------------------------|:------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------|\n| `propertynames(x::ObjType, private::Bool=false)` | `fieldnames(typeof(x))` | Return a tuple of the properties (`x.property`) of an object `x`. If `private=true`, also return property names intended to be kept as private |\n| `getproperty(x::ObjType, s::Symbol)` | `getfield(x, s)` | Return property `s` of `x`. `x.s` calls `getproperty(x, :s)`. |\n| `setproperty!(x::ObjType, s::Symbol, v)` | `setfield!(x, s, v)` | Set property `s` of `x` to `v`. `x.s = v` calls `setproperty!(x, :s, v)`. Should return `v`. |\nSometimes, it is desirable to change how the end-user interacts with the fields of an object. Instead of granting direct access to type fields, an extra layer of abstraction between the user and the code can be provided by overloading `object.field`. Properties are what the user *sees of* the object, fields what the object *actually is*."}
{"text": "## [Instance Properties](#man-instance-properties)\nBy default, properties and fields are the same. However, this behavior can be changed. For example, take this representation of a point in a plane in [polar coordinates](https://en.wikipedia.org/wiki/Polar_coordinate_system):\n```julia-repl\njulia> mutable struct Point\n r::Float64\n ϕ::Float64\n end\n\njulia> p = Point(7.0, pi/4)\nPoint(7.0, 0.7853981633974483)\n```\nAs described in the table above, dot access `p.r` is the same as `getproperty(p, :r)`, which is by default the same as `getfield(p, :r)`:\n```julia-repl\njulia> propertynames(p)\n(:r, :ϕ)\n\njulia> getproperty(p, :r), getproperty(p, :ϕ)\n(7.0, 0.7853981633974483)\n\njulia> p.r, p.ϕ\n(7.0, 0.7853981633974483)\n\njulia> getfield(p, :r), getfield(p, :ϕ)\n(7.0, 0.7853981633974483)\n```\nHowever, we may want users to be unaware that `Point` stores the coordinates as `r` and `ϕ` (fields), and instead interact with `x` and `y` (properties). The methods in the first column can be defined to add new functionality:"}
{"text": "## [Instance Properties](#man-instance-properties)\n```julia-repl\njulia> Base.propertynames(::Point, private::Bool=false) = private ? (:x, :y, :r, :ϕ) : (:x, :y)\n\njulia> function Base.getproperty(p::Point, s::Symbol)\n if s === :x\n return getfield(p, :r) * cos(getfield(p, :ϕ))\n elseif s === :y\n return getfield(p, :r) * sin(getfield(p, :ϕ))\n else\n # This allows accessing fields with p.r and p.ϕ\n return getfield(p, s)\n end\n end\n\njulia> function Base.setproperty!(p::Point, s::Symbol, f)\n if s === :x\n y = p.y\n setfield!(p, :r, sqrt(f^2 + y^2))\n setfield!(p, :ϕ, atan(y, f))\n return f\n elseif s === :y\n x = p.x\n setfield!(p, :r, sqrt(x^2 + f^2))\n setfield!(p, :ϕ, atan(f, x))\n return f\n else\n # This allows modifying fields with p.r and p.ϕ\n return setfield!(p, s, f)\n end\n end\n```\nIt is important that `getfield` and `setfield!` are used inside `getproperty` and `setproperty!` instead of the dot syntax, since the dot syntax would make the functions recursive, which can lead to type inference issues. We can now try out the new functionality:\n```julia-repl\njulia> propertynames(p)\n(:x, :y)\n\njulia> p.x\n4.949747468305833\n\njulia> p.y = 4.0\n4.0\n\njulia> p.r\n6.363961030678928\n```"}
{"text": "## [Instance Properties](#man-instance-properties)\nFinally, it is worth noting that adding instance properties like this is quite rarely done in Julia and should in general only be done if there is a good reason for doing so."}
{"text": "## [Rounding](#man-rounding-interface)\n| Methods to implement | Default definition | Brief description |\n|:----------------------------------------------|:--------------------------|:----------------------------------------------------------------------------------------------------|\n| `round(x::ObjType, r::RoundingMode)` | none | Round `x` and return the result. If possible, `round` should return an object of the same type as `x` |\n| `round(T::Type, x::ObjType, r::RoundingMode)` | `convert(T, round(x, r))` | Round `x`, returning the result as a `T` |\nTo support rounding on a new type it is typically sufficient to define the single method `round(x::ObjType, r::RoundingMode)`. The passed rounding mode determines in which direction the value should be rounded. The most commonly used rounding modes are `RoundNearest`, `RoundToZero`, `RoundDown`, and `RoundUp`, as these rounding modes are used in the definitions of the one-argument `round` method, and of `trunc`, `floor`, and `ceil`, respectively."}
{"text": "## [Rounding](#man-rounding-interface)\nIn some cases, it is possible to define a three-argument `round` method that is more accurate or performant than the two-argument method followed by conversion. In this case it is acceptable to define the three argument method in addition to the two argument method. If it is impossible to represent the rounded result as an object of the type `T`, then the three argument method should throw an `InexactError`.\nFor example, if we have an `Interval` type which represents a range of possible values similar to https://github.com/JuliaPhysics/Measurements.jl, we may define rounding on that type with the following\n```julia-repl\njulia> struct Interval{T}\n min::T\n max::T\n end\n\njulia> Base.round(x::Interval, r::RoundingMode) = Interval(round(x.min, r), round(x.max, r))\n\njulia> x = Interval(1.7, 2.2)\nInterval{Float64}(1.7, 2.2)\n\njulia> round(x)\nInterval{Float64}(2.0, 2.0)\n\njulia> floor(x)\nInterval{Float64}(1.0, 2.0)\n\njulia> ceil(x)\nInterval{Float64}(2.0, 3.0)\n\njulia> trunc(x)\nInterval{Float64}(1.0, 2.0)\n```\n------------------------------------------------------------------------"}
{"text": "# Modules · The Julia Language\nSource: https://docs.julialang.org/en/v1/manual/modules/"}
{"text": "# [Modules](#modules)\nModules in Julia help organize code into coherent units. They are delimited syntactically inside `module NameOfModule ... end`, and have the following features:\n1. Modules are separate namespaces, each introducing a new global scope. This is useful, because it allows the same name to be used for different functions or global variables without conflict, as long as they are in separate modules.\n2. Modules have facilities for detailed namespace management: each defines a set of names it `export`s and marks as `public`, and can import names from other modules with `using` and `import` (we explain these below).\n3. Modules can be precompiled for faster loading, and may contain code for runtime initialization.\nTypically, in larger Julia packages you will see module code organized into files, eg\n```julia\nmodule SomeModule\n\n# export, public, using, import statements are usually here; we discuss these below\n\ninclude(\"file1.jl\")\ninclude(\"file2.jl\")\n\nend\n```\nFiles and file names are mostly unrelated to modules; modules are associated only with module expressions. One can have multiple files per module, and multiple modules per file. `include` behaves as if the contents of the source file were evaluated in the global scope of the including module. In this chapter, we use short and simplified examples, so we won't use `include`."}
{"text": "# [Modules](#modules)\nThe recommended style is not to indent the body of the module, since that would typically lead to whole files being indented. Also, it is common to use `UpperCamelCase` for module names (just like types), and use the plural form if applicable, especially if the module contains a similarly named identifier, to avoid name clashes. For example,\n```julia\nmodule FastThings\n\nstruct FastThing\n ...\nend\n\nend\n```"}
{"text": "## [Namespace management](#namespace-management)\nNamespace management refers to the facilities the language offers for making names in a module available in other modules. We discuss the related concepts and functionality below in detail."}
{"text": "### [Qualified names](#Qualified-names)\nNames for functions, variables and types in the global scope like `sin`, `ARGS`, and `UnitRange` always belong to a module, called the *parent module*, which can be found interactively with [`parentmodule`](../../base/base/#Base.parentmodule), for example\n```julia-repl\njulia> parentmodule(UnitRange)\nBase\n```\nOne can also refer to these names outside their parent module by prefixing them with their module, eg `Base.UnitRange`. This is called a *qualified name*. The parent module may be accessible using a chain of submodules like `Base.Math.sin`, where `Base.Math` is called the *module path*. Due to syntactic ambiguities, qualifying a name that contains only symbols, such as an operator, requires inserting a colon, e.g. `Base.:+`. A small number of operators additionally require parentheses, e.g. `Base.:(==)`.\nIf a name is qualified, then it is always *accessible*, and in case of a function, it can also have methods added to it by using the qualified name as the function name.\nWithin a module, a variable name can be “reserved” without assigning to it by declaring it as `global x`. This prevents name conflicts for globals initialized after load time. The syntax `M.x = y` does not work to assign a global in another module; global assignment is always module-local."}
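{"text": "### [Qualified names](#Qualified-names)\nAs a small illustration of adding a method through a qualified name (module and type names here are hypothetical):\n```julia\nmodule Temperatures\nstruct Celsius\n    deg::Float64\nend\nend\n\n# No `import` is needed: the qualified name `Base.:(==)` can be extended directly.\nBase.:(==)(a::Temperatures.Celsius, b::Temperatures.Celsius) = a.deg == b.deg\n\nTemperatures.Celsius(21.0) == Temperatures.Celsius(21.0)   # true\n```"}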
{"text": "### [Export lists](#Export-lists)\nNames (referring to functions, types, global variables, and constants) can be added to the *export list* of a module with `export`: these are the symbols that are imported when `using` the module. Typically, they are at or near the top of the module definition so that readers of the source code can find them easily, as in\n```julia-repl\njulia> module NiceStuff\n export nice, DOG\n struct Dog end # singleton type, not exported\n const DOG = Dog() # named instance, exported\n nice(x) = \"nice $x\" # function, exported\n end;\n```\nbut this is just a style suggestion — a module can have multiple `export` statements in arbitrary locations.\nIt is common to export names which form part of the API (application programming interface). In the above code, the export list suggests that users should use `nice` and `DOG`. However, since qualified names always make identifiers accessible, this is just an option for organizing APIs: unlike other languages, Julia has no facilities for truly hiding module internals.\nAlso, some modules don't export names at all. This is usually done if they use common words, such as `derivative`, in their API, which could easily clash with the export lists of other modules. We will see how to manage name clashes below."}
{"text": "### [Export lists](#Export-lists)\nTo mark a name as public without exporting it into the namespace of folks who call `using NiceStuff`, one can use `public` instead of `export`. This marks the public name(s) as part of the public API, but does not have any namespace implications. The `public` keyword is only available in Julia 1.11 and above. To maintain compatibility with Julia 1.10 and below, use the `@compat` macro from the [Compat](https://github.com/JuliaLang/Compat.jl) package."}
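{"text": "### [Export lists](#Export-lists)\nA sketch of how `export` and `public` differ (hypothetical module; the `public` keyword requires Julia 1.11 or above):\n```julia\nmodule MyLib\n\nexport run_all   # brought into scope by `using MyLib`\npublic helper    # part of the public API, but must be qualified: `MyLib.helper`\n\nhelper() = 41\nrun_all() = helper() + 1\n\nend\n```"}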
{"text": "### [Standalone using and import](#Standalone-using-and-import)\nFor interactive use, the most common way of loading a module is `using ModuleName`. This [loads](../code-loading/#code-loading) the code associated with `ModuleName`, and brings\n1. the module name\n2. and the elements of the export list into the surrounding global namespace.\nTechnically, the statement `using ModuleName` means that a module called `ModuleName` will be available for resolving names as needed. When a global variable is encountered that has no definition in the current module, the system will search for it among variables exported by `ModuleName` and use it if it is found there. This means that all uses of that global within the current module will resolve to the definition of that variable in `ModuleName`.\nTo load a module from a package, the statement `using ModuleName` can be used. To load a module from a locally defined module, a dot needs to be added before the module name like `using .ModuleName`.\nTo continue with our example,\n```julia-repl\njulia> using .NiceStuff\n```\nwould load the above code, making `NiceStuff` (the module name), `DOG` and `nice` available. `Dog` is not on the export list, but it can be accessed if the name is qualified with the module path (which here is just the module name) as `NiceStuff.Dog`.\nImportantly, **`using ModuleName` is the only form for which export lists matter at all**.\nIn contrast,\n```julia-repl\njulia> import .NiceStuff\n```"}
{"text": "### [Standalone using and import](#Standalone-using-and-import)\nbrings *only* the module name into scope. Users would need to use `NiceStuff.DOG`, `NiceStuff.Dog`, and `NiceStuff.nice` to access its contents. Usually, `import ModuleName` is used in contexts when the user wants to keep the namespace clean. As we will see in the next section `import .NiceStuff` is equivalent to `using .NiceStuff: NiceStuff`.\nYou can combine multiple `using` and `import` statements of the same kind in a comma-separated expression, e.g.\n```julia-repl\njulia> using LinearAlgebra, Random\n```"}
{"text": "### [using and import with specific identifiers, and adding methods](#using-and-import-with-specific-identifiers,-and-adding-methods)\nWhen `using ModuleName:` or `import ModuleName:` is followed by a comma-separated list of names, the module is loaded, but *only those specific names are brought into the namespace* by the statement. For example,\n```julia-repl\njulia> using .NiceStuff: nice, DOG\n```\nwill import the names `nice` and `DOG`.\nImportantly, the module name `NiceStuff` will *not* be in the namespace. If you want to make it accessible, you have to list it explicitly, as\n```julia-repl\njulia> using .NiceStuff: nice, DOG, NiceStuff\n```\nWhen two or more packages/modules export a name and that name does not refer to the same thing in each of the packages, and the packages are loaded via `using` without an explicit list of names, it is an error to reference that name without qualification. It is thus recommended that code intended to be forward-compatible with future versions of its dependencies and of Julia, e.g., code in released packages, list the names it uses from each loaded package, e.g., `using Foo: Foo, f` rather than `using Foo`.\nJulia has two forms for seemingly the same thing because only `import ModuleName: f` allows adding methods to `f` *without a module path*. That is to say, the following example will give an error:"}
{"text": "### [using and import with specific identifiers, and adding methods](#using-and-import-with-specific-identifiers,-and-adding-methods)\n```julia-repl\njulia> using .NiceStuff: nice\n\njulia> struct Cat end\n\njulia> nice(::Cat) = \"nice 😸\"\nERROR: invalid method definition in Main: function NiceStuff.nice must be explicitly imported to be extended\nStacktrace:\n [1] top-level scope\n @ none:0\n [2] top-level scope\n @ none:1\n```\nThis error prevents accidentally adding methods to functions in other modules that you only intended to use.\nThere are two ways to deal with this. You can always qualify function names with a module path:\n```julia-repl\njulia> using .NiceStuff\n\njulia> struct Cat end\n\njulia> NiceStuff.nice(::Cat) = \"nice 😸\"\n```\nAlternatively, you can `import` the specific function name:\n```julia-repl\njulia> import .NiceStuff: nice\n\njulia> struct Cat end\n\njulia> nice(::Cat) = \"nice 😸\"\nnice (generic function with 2 methods)\n```\nWhich one you choose is a matter of style. The first form makes it clear that you are adding a method to a function in another module (remember, that the imports and the method definition may be in separate files), while the second one is shorter, which is especially convenient if you are defining multiple methods.\nOnce a variable is made visible via `using` or `import`, a module may not create its own variable with the same name. Imported variables are read-only; assigning to a global variable always affects a variable owned by the current module, or else raises an error."}
{"text": "### [Renaming with as](#Renaming-with-as)\nAn identifier brought into scope by `import` or `using` can be renamed with the keyword `as`. This is useful for working around name conflicts as well as for shortening names. For example, `Base` exports the function name `read`, but the CSV.jl package also provides `CSV.read`. If we are going to invoke CSV reading many times, it would be convenient to drop the `CSV.` qualifier. But then it is ambiguous whether we are referring to `Base.read` or `CSV.read`:\n```julia-repl\njulia> read;\n\njulia> import CSV: read\nWARNING: ignoring conflicting import of CSV.read into Main\n```\nRenaming provides a solution:\n```julia-repl\njulia> import CSV: read as rd\n```\nImported packages themselves can also be renamed:\n```julia\nimport BenchmarkTools as BT\n```\n`as` works with `using` only when a single identifier is brought into scope. For example `using CSV: read as rd` works, but `using CSV as C` does not, since it operates on all of the exported names in `CSV`."}
{"text": "### [Mixing multiple using and import statements](#Mixing-multiple-using-and-import-statements)\nWhen multiple `using` or `import` statements of any of the forms above are used, their effect is combined in the order they appear. For example,\n```julia-repl\njulia> using .NiceStuff # exported names and the module name\n\njulia> import .NiceStuff: nice # allows adding methods to unqualified functions\n```\nwould bring all the exported names of `NiceStuff` and the module name itself into scope, and also allow adding methods to `nice` without prefixing it with a module name."}
{"text": "### [Handling name conflicts](#Handling-name-conflicts)\nConsider the situation where two (or more) packages export the same name, as in\n```julia-repl\njulia> module A\n export f\n f() = 1\n end\nA\n\njulia> module B\n export f\n f() = 2\n end\nB\n```\nThe statement `using .A, .B` works, but when you try to call `f`, you get an error with a hint\n```julia-repl\njulia> using .A, .B\n\njulia> f\nERROR: UndefVarError: `f` not defined in `Main`\nHint: It looks like two or more modules export different bindings with this name, resulting in ambiguity. Try explicitly importing it from a particular module, or qualifying the name with the module it should come from.\n```\nHere, Julia cannot decide which `f` you are referring to, so you have to make a choice. The following solutions are commonly used:\n1. Simply proceed with qualified names like `A.f` and `B.f`. This makes the context clear to the reader of your code, especially if `f` just happens to coincide but has different meaning in various packages. For example, `degree` has various uses in mathematics, the natural sciences, and in everyday life, and these meanings should be kept separate.\n2. Use the `as` keyword above to rename one or both identifiers, eg\n ```julia-repl\n julia> using .A: f as f\n\n julia> using .B: f as g\n ```\n would make `B.f` available as `g`. Here, we are assuming that you did not use `using A` before, which would have brought `f` into the namespace."}
{"text": "### [Handling name conflicts](#Handling-name-conflicts)\n3. When the names in question *do* share a meaning, it is common for one module to import it from another, or have a lightweight “base” package with the sole function of defining an interface like this, which can be used by other packages. It is conventional to have such package names end in `...Base` (which has nothing to do with Julia's `Base` module)."}
{"text": "### [Default top-level definitions and bare modules](#Default-top-level-definitions-and-bare-modules)\nModules automatically contain `using Core`, `using Base`, and definitions of the [`eval`](../../base/base/#eval) and [`include`](../../base/base/#include) functions, which evaluate expressions/files within the global scope of that module.\nIf these default definitions are not wanted, modules can be defined using the keyword [`baremodule`](../../base/base/#baremodule) instead (note: `Core` is still imported). In terms of `baremodule`, a standard `module` looks like this:\n```julia\nbaremodule Mod\n\nusing Base\n\neval(x) = Core.eval(Mod, x)\ninclude(p) = Base.include(Mod, p)\n\n...\n\nend\n```\nIf even `Core` is not wanted, a module that imports nothing and defines no names at all can be defined with `Module(:YourNameHere, false, false)` and code can be evaluated into it with [`@eval`](../../base/base/#Base.@eval) or [`Core.eval`](../../devdocs/init/#Core.eval):\n```julia-repl\njulia> arithmetic = Module(:arithmetic, false, false)\nMain.arithmetic\n\njulia> @eval arithmetic add(x, y) = $(+)(x, y)\nadd (generic function with 1 method)\n\njulia> arithmetic.add(12, 13)\n25\n```"}
{"text": "### [Standard modules](#Standard-modules)\nThere are three important standard modules:\n- [`Core`](../../base/base/#Core) contains all functionality \"built into\" the language.\n- [`Base`](../../base/base/#Base) contains basic functionality that is useful in almost all cases.\n- [`Main`](../../base/base/#Main) is the top-level module and the current module, when Julia is started.\nBy default Julia ships with some standard library modules. These behave like regular Julia packages except that you don't need to install them explicitly. For example, if you wanted to perform some unit testing, you could load the `Test` standard library as follows:\n```julia\nusing Test\n```"}
{"text": "## [Submodules and relative paths](#Submodules-and-relative-paths)\nModules can contain *submodules*, nesting the same syntax `module ... end`. They can be used to introduce separate namespaces, which can be helpful for organizing complex codebases. Note that each `module` introduces its own [scope](../variables-and-scoping/#scope-of-variables), so submodules do not automatically “inherit” names from their parent.\nIt is recommended that submodules refer to other modules within the enclosing parent module (including the latter) using *relative module qualifiers* in `using` and `import` statements. A relative module qualifier starts with a period (`.`), which corresponds to the current module, and each successive `.` leads to the parent of the current module. This should be followed by modules if necessary, and eventually the actual name to access, all separated by `.`s.\nConsider the following example, where the submodule `SubA` defines a function, which is then extended in its “sibling” module:\n```julia-repl\njulia> module ParentModule\n module SubA\n export add_D # exported interface\n const D = 3\n add_D(x) = x + D\n end\n using .SubA # brings `add_D` into the namespace\n export add_D # export it from ParentModule too\n module SubB\n import ..SubA: add_D # relative path for a “sibling” module\n struct Infinity end\n add_D(x::Infinity) = x\n end\n end;\n```\nYou may see code in packages, which, in a similar situation, uses\n```julia-repl\njulia> import .ParentModule.SubA: add_D\n```"}
{"text": "## [Submodules and relative paths](#Submodules-and-relative-paths)\nHowever, this operates through [code loading](../code-loading/#code-loading), and thus only works if `ParentModule` is in a package. It is better to use relative paths.\nNote that the order of definitions also matters if you are evaluating values. Consider\n```julia\nmodule TestPackage\n\nexport x, y\n\nx = 0\n\nmodule Sub\nusing ..TestPackage\nz = y # ERROR: UndefVarError: `y` not defined in `Main`\nend\n\ny = 1\n\nend\n```\nwhere `Sub` is trying to use `TestPackage.y` before it was defined, so it does not have a value.\nFor similar reasons, you cannot use a cyclic ordering:\n```julia\nmodule A\n\nmodule B\nusing ..C # ERROR: UndefVarError: `C` not defined in `Main.A`\nend\n\nmodule C\nusing ..B\nend\n\nend\n```"}
{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nLarge modules can take several seconds to load because executing all of the statements in a module often involves compiling a large amount of code. Julia creates precompiled caches of the module to reduce this time.\nPrecompiled module files (sometimes called \"cache files\") are created and used automatically when `import` or `using` loads a module. If the cache file(s) do not yet exist, the module will be compiled and saved for future reuse. You can also manually call [`Base.compilecache(Base.identify_package(\"modulename\"))`](../../base/base/#Base.compilecache) to create these files without loading the module. The resulting cache files will be stored in the `compiled` subfolder of `DEPOT_PATH[1]`. If nothing about your system changes, such cache files will be used when you load the module with `import` or `using`.\nPrecompilation cache files store definitions of modules, types, methods, and constants. They may also store method specializations and the code generated for them, but this typically requires that the developer add explicit [`precompile`](../../base/base/#Base.precompile) directives or execute workloads that force compilation during the package build."}
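{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nAs a minimal sketch of the manual route described above (assuming a package named `Example` is installed in the active environment):\n```julia-repl\njulia> id = Base.identify_package(\"Example\");\n\njulia> Base.compilecache(id);  # writes cache files under the `compiled` subfolder of DEPOT_PATH[1]\n```"}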
{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nHowever, if you update the module's dependencies or change its source code, the module is automatically recompiled upon `using` or `import`. Dependencies are modules it imports, the Julia build, files it includes, or explicit dependencies declared by [`include_dependency(path)`](../../base/base/#Base.include_dependency) in the module file(s).\nFor file dependencies loaded by `include`, a change is determined by examining whether the file size (`fsize`) or content (condensed into a hash) is unchanged. For file dependencies loaded by `include_dependency` a change is determined by examining whether the modification time (`mtime`) is unchanged, or equal to the modification time truncated to the nearest second (to accommodate systems that can't copy mtime with sub-second accuracy). It also takes into account whether the path to the file chosen by the search logic in `require` matches the path that had created the precompile file. It also takes into account the set of dependencies already loaded into the current process and won't recompile those modules, even if their files change or disappear, in order to avoid creating incompatibilities between the running system and the precompile cache. Finally, it takes account of changes in any [compile-time preferences](../code-loading/#preferences)."}
{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nIf you know that a module is *not* safe to precompile (for example, for one of the reasons described below), you should put `__precompile__(false)` in the module file (typically placed at the top). This will cause `Base.compilecache` to throw an error, and will cause `using` / `import` to load it directly into the current process and skip the precompile and caching. This also thereby prevents the module from being imported by any other precompiled module.\nYou may need to be aware of certain behaviors inherent in the creation of incremental shared libraries which may require care when writing your module. For example, external state is not preserved. To accommodate this, explicitly separate any initialization steps that must occur at *runtime* from steps that can occur at *compile time*. For this purpose, Julia allows you to define an `__init__()` function in your module that executes any initialization steps that must occur at runtime. This function will not be called during compilation (`--output-*`). Effectively, you can assume it will be run exactly once in the lifetime of the code. You may, of course, call it manually if necessary, but the default is to assume this function deals with computing state for the local machine, which does not need to be – or even should not be – captured in the compiled image. It will be called after the module is loaded into a process, including if it is being loaded into an incremental compile (`--output-incremental=yes`), but not if it is being loaded into a full-compilation process."}
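{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nA minimal sketch of opting out (the module name here is hypothetical):\n```julia\nmodule MyUnsafeModule\n\n__precompile__(false)  # always load this module directly; never write a cache file\n\n# ... definitions that are not safe to cache ...\n\nend\n```"}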
{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nIn particular, if you define a `function __init__()` in a module, then Julia will call `__init__()` immediately *after* the module is loaded (e.g., by `import`, `using`, or `require`) at runtime for the *first* time (i.e., `__init__` is only called once, and only after all statements in the module have been executed). Because it is called after the module is fully imported, any submodules or other imported modules have their `__init__` functions called *before* the `__init__` of the enclosing module.\nTwo typical uses of `__init__` are calling runtime initialization functions of external C libraries and initializing global constants that involve pointers returned by external libraries. For example, suppose that we are calling a C library `libfoo` that requires us to call a `foo_init()` initialization function at runtime. Suppose that we also want to define a global constant `foo_data_ptr` that holds the return value of a `void *foo_data()` function defined by `libfoo` – this constant must be initialized at runtime (not at compile time) because the pointer address will change from run to run. You could accomplish this by defining the following `__init__` function in your module:\n```julia\nconst foo_data_ptr = Ref{Ptr{Cvoid}}(0)\nfunction __init__()\n ccall((:foo_init, :libfoo), Cvoid, ())\n foo_data_ptr[] = ccall((:foo_data, :libfoo), Ptr{Cvoid}, ())\n nothing\nend\n```"}
{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nNotice that it is perfectly possible to define a global inside a function like `__init__`; this is one of the advantages of using a dynamic language. But by making it a constant at global scope, we can ensure that the type is known to the compiler and allow it to generate better optimized code. Obviously, any other globals in your module that depend on `foo_data_ptr` would also have to be initialized in `__init__`.\nConstants involving most Julia objects that are not produced by [`ccall`](../../base/c/#ccall) do not need to be placed in `__init__`: their definitions can be precompiled and loaded from the cached module image. This includes complicated heap-allocated objects like arrays. However, any routine that returns a raw pointer value must be called at runtime for precompilation to work ([`Ptr`](../../base/c/#Core.Ptr) objects will turn into null pointers unless they are hidden inside an [`isbits`](../../base/base/#Base.isbits) object). This includes the return values of the Julia functions [`@cfunction`](../../base/c/#Base.@cfunction) and [`pointer`](../../base/c/#Base.pointer)."}
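{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nThe same `Ref` pattern applies to `@cfunction`. In this sketch (the callback itself is hypothetical), the raw pointer is created in `__init__` rather than stored directly in a constant, so it is regenerated on every load instead of turning into a null pointer in the cached image:\n```julia\nmy_callback(x::Cint)::Cint = x + one(Cint)\n\nconst my_callback_ptr = Ref{Ptr{Cvoid}}(C_NULL)\nfunction __init__()\n    # @cfunction returns a raw pointer, so it must run at load time:\n    my_callback_ptr[] = @cfunction(my_callback, Cint, (Cint,))\n    nothing\nend\n```"}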
{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nDictionary and set types, or in general anything that depends on the output of a `hash(key)` method, are a trickier case. In the common case where the keys are numbers, strings, symbols, ranges, `Expr`, or compositions of these types (via arrays, tuples, sets, pairs, etc.) they are safe to precompile. However, for a few other key types, such as `Function` or `DataType` and generic user-defined types where you haven't defined a `hash` method, the fallback `hash` method depends on the memory address of the object (via its `objectid`) and hence may change from run to run. If you have one of these key types, or if you aren't sure, to be safe you can initialize this dictionary from within your `__init__` function. Alternatively, you can use the [`IdDict`](../../base/collections/#Base.IdDict) dictionary type, which is specially handled by precompilation so that it is safe to initialize at compile-time.\nWhen using precompilation, it is important to keep a clear sense of the distinction between the compilation phase and the execution phase. In this mode, it will often be much more clearly apparent that Julia is a compiler which allows execution of arbitrary Julia code, not a standalone interpreter that also generates compiled code.\nOther known potential failure scenarios include:\n1. Global counters (for example, for attempting to uniquely identify objects). Consider the following code snippet:"}
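{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nA short sketch of both options discussed above (the table of handlers is hypothetical). Because `Function` keys fall back to `objectid`-based hashing, the `Dict` is filled at load time, while the `IdDict` may be populated at compile time:\n```julia\nconst handlers = Dict{Function,String}()\nfunction __init__()\n    # rebuilt on every load, so the objectid-based hashes are fresh:\n    handlers[sin] = \"sine\"\n    handlers[cos] = \"cosine\"\nend\n\n# IdDict is handled specially by precompilation, so this is safe as-is:\nconst id_handlers = IdDict{Function,String}(sin => \"sine\", cos => \"cosine\")\n```"}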
{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\n ```julia\n mutable struct UniquedById\n myid::Int\n let counter = 0\n UniquedById() = new(counter += 1)\n end\n end\n ```\n While the intent of this code was to give every instance a unique id, the counter value is recorded at the end of compilation. All subsequent usages of this incrementally compiled module will start from that same counter value.\n Note that `objectid` (which works by hashing the memory pointer) has similar issues (see notes on `Dict` usage below).\n One alternative is to use a macro to capture [`@__MODULE__`](../../base/base/#Base.@__MODULE__) and store it along with the current `counter` value; however, it may be better to redesign the code to not depend on this global state.\n2. Associative collections (such as `Dict` and `Set`) need to be re-hashed in `__init__`. (In the future, a mechanism may be provided to register an initializer function.)\n3. Depending on compile-time side-effects persisting through load-time. Examples include: modifying arrays or other variables in other Julia modules; maintaining handles to open files or devices; storing pointers to other system resources (including memory).\n4. Creating accidental \"copies\" of global state from another module, by referencing it directly instead of via its lookup path. For example (in global scope):"}
{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\n ```julia\n #mystdout = Base.stdout #= will not work correctly, since this will copy Base.stdout into this module =#\n # instead use accessor functions:\n getstdout() = Base.stdout #= best option =#\n # or move the assignment into the runtime:\n __init__() = global mystdout = Base.stdout #= also works =#\n ```\nSeveral additional restrictions are placed on the operations that can be done while precompiling code to help the user avoid other wrong-behavior situations:\n1. Calling [`eval`](../../base/base/#eval) to cause a side-effect in another module. This will also cause a warning to be emitted when the incremental precompile flag is set.\n2. `global const` statements from local scope after `__init__()` has been started (see issue #12010 for plans to add an error for this)\n3. Replacing a module is a runtime error while doing an incremental precompile.\nA few other points to be aware of:\n1. No code reload / cache invalidation is performed after changes are made to the source files themselves, (including by `Pkg.update`), and no cleanup is done after `Pkg.rm`\n2. The memory sharing behavior of a reshaped array is disregarded by precompilation (each view gets its own copy)"}
{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\n3. Expecting the filesystem to be unchanged between compile-time and runtime e.g. [`@__FILE__`](../../base/base/#Base.@__FILE__)/`source_path()` to find resources at runtime, or the BinDeps `@checked_lib` macro. Sometimes this is unavoidable. However, when possible, it can be good practice to copy resources into the module at compile-time so they won't need to be found at runtime.\n4. `WeakRef` objects and finalizers are not currently handled properly by the serializer (this will be fixed in an upcoming release).\n5. It is usually best to avoid capturing references to instances of internal metadata objects such as `Method`, `MethodInstance`, `MethodTable`, `TypeMapLevel`, `TypeMapEntry` and fields of those objects, as this can confuse the serializer and may not lead to the outcome you desire. It is not necessarily an error to do this, but you simply need to be prepared that the system will try to copy some of these and to create a single unique instance of others."}
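{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nFor point 3 above, one sketch of copying a resource into the module at compile time (the module and data file are hypothetical):\n```julia\nmodule MyPkg\n\n# Read the file while precompiling; its bytes are stored in the cache image,\n# so the file need not exist (or be findable) at runtime:\nconst EMBEDDED_DATA = read(joinpath(@__DIR__, \"data\", \"table.csv\"))\n\nend\n```\nThis trades cache size for robustness: the module no longer depends on `@__FILE__`/`source_path()` resolving correctly after installation."}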
{"text": "## [Module initialization and precompilation](#Module-initialization-and-precompilation)\nIt is sometimes helpful during module development to turn off incremental precompilation. The command line flag `--compiled-modules={yes|no|existing}` enables you to toggle module precompilation on and off. When Julia is started with `--compiled-modules=no` the serialized modules in the compile cache are ignored when loading modules and module dependencies. In some cases, you may want to load existing precompiled modules, but not create new ones. This can be done by starting Julia with `--compiled-modules=existing`. More fine-grained control is available with `--pkgimages={yes|no|existing}`, which only affects native-code storage during precompilation. `Base.compilecache` can still be called manually. The state of this command line flag is passed to `Pkg.build` to disable automatic precompilation triggering when installing, updating, and explicitly building packages.\nYou can also debug some precompilation failures with environment variables. Setting `JULIA_VERBOSE_LINKING=true` may help resolve failures in linking shared libraries of compiled native code. See the **Developer Documentation** part of the Julia manual, where you will find further details in the section documenting Julia's internals under \"Package Images\".\n------------------------------------------------------------------------"}
{"text": "# Documentation · The Julia Language\nSource: https://docs.julialang.org/en/v1/manual/documentation/"}
{"text": "# [Documentation](#man-documentation)"}
{"text": "## [Accessing Documentation](#Accessing-Documentation)\nDocumentation can be accessed at the REPL or in [IJulia](https://github.com/JuliaLang/IJulia.jl) by typing `?` followed by the name of a function or macro, and pressing `Enter`. For example,\n```julia\n?cos\n?@time\n?r\"\"\n```\nwill show documentation for the relevant function, macro or string macro respectively. Most Julia environments provide a way to access documentation directly:\n- [VS Code](https://www.julia-vscode.org/) shows documentation when you hover over a function name. You can also use the Julia panel in the sidebar to search for documentation.\n- In [Pluto](https://github.com/fonsp/Pluto.jl), open the \"Live Docs\" panel on the bottom right.\n- In [Juno](https://junolab.org) using `Ctrl-J, Ctrl-D` will show the documentation for the object under the cursor.\n`Docs.hasdoc(module, name)::Bool` tells whether a name has a docstring. `Docs.undocumented_names(module; all)` returns the undocumented names in a module."}
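{"text": "## [Accessing Documentation](#Accessing-Documentation)\nThese can be used programmatically to audit a module's documentation coverage, as in this sketch (results depend on the Julia version in use):\n```julia-repl\njulia> Docs.hasdoc(Base, :sum)\ntrue\n\njulia> missing_docs = Docs.undocumented_names(Base);\n```"}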
{"text": "## [Writing Documentation](#Writing-Documentation)\nJulia enables package developers and users to document functions, types and other objects easily via a built-in documentation system.\nThe basic syntax is simple: any string appearing just before an object (function, macro, type or instance) will be interpreted as documenting it (these are called *docstrings*). Note that no blank lines or comments may intervene between a docstring and the documented object. Here is a basic example:\n```julia\n\"Tell whether there are too foo items in the array.\"\nfoo(xs::Array) = ...\n```\nDocumentation is interpreted as [Markdown](https://en.wikipedia.org/wiki/Markdown), so you can use indentation and code fences to delimit code examples from text. Technically, any object can be associated with any other as metadata; Markdown happens to be the default, but one can construct other string macros and pass them to the `@doc` macro just as well.\nMarkdown support is implemented in the `Markdown` standard library and for a full list of supported syntax see the [documentation](../../stdlib/Markdown/#markdown_stdlib).\nHere is a more complex example, still using Markdown:\n````julia\n\"\"\"\n bar(x[, y])\n\nCompute the Bar index between `x` and `y`.\n\nIf `y` is unspecified, compute the Bar index between all pairs of columns of `x`.\n\n# Examples\n```julia-repl\njulia> bar([1, 2], [1, 2])\n1\n```\n\"\"\"\nfunction bar(x, y) ...\n````\nAs in the example above, we recommend following some simple conventions when writing documentation:"}
{"text": "## [Writing Documentation](#Writing-Documentation)\n1. Always show the signature of a function at the top of the documentation, with a four-space indent so that it is printed as Julia code.\n This can be identical to the signature present in the Julia code (like `mean(x::AbstractArray)`), or a simplified form. Optional arguments should be represented with their default values (i.e. `f(x, y=1)`) when possible, following the actual Julia syntax. Optional arguments which do not have a default value should be put in brackets (i.e. `f(x[, y])` and `f(x[, y[, z]])`). An alternative solution is to use several lines: one without optional arguments, the other(s) with them. This solution can also be used to document several related methods of a given function. When a function accepts many keyword arguments, only include a `