
Auto-generated code for 8.19 #2878


Open · wants to merge 1 commit into base: 8.19
111 changes: 76 additions & 35 deletions docs/reference.asciidoc

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion src/api/api/cluster.ts
@@ -469,7 +469,7 @@ export default class Cluster {
}

/**
- * Get remote cluster information. Get information about configured remote clusters. The API returns connection and endpoint information keyed by the configured remote cluster alias. > info > This API returns information that reflects current state on the local cluster. > The `connected` field does not necessarily reflect whether a remote cluster is down or unavailable, only whether there is currently an open connection to it. > Elasticsearch does not spontaneously try to reconnect to a disconnected remote cluster. > To trigger a reconnection, attempt a cross-cluster search, ES|QL cross-cluster search, or try the [resolve cluster endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-resolve-cluster).
+ * Get remote cluster information. Get information about configured remote clusters. The API returns connection and endpoint information keyed by the configured remote cluster alias. > info > This API returns information that reflects current state on the local cluster. > The `connected` field does not necessarily reflect whether a remote cluster is down or unavailable, only whether there is currently an open connection to it. > Elasticsearch does not spontaneously try to reconnect to a disconnected remote cluster. > To trigger a reconnection, attempt a cross-cluster search, ES|QL cross-cluster search, or try the `/_resolve/cluster` endpoint.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/cluster-remote-info.html | Elasticsearch API documentation}
*/
async remoteInfo (this: That, params?: T.ClusterRemoteInfoRequest | TB.ClusterRemoteInfoRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.ClusterRemoteInfoResponse>
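
For illustration only, a minimal sketch of the call this comment describes, assuming an already-configured @elastic/elasticsearch Client instance named `client`:

// Minimal sketch: list configured remote clusters and their current connection state.
// Assumes `client` is an existing Client instance; aliases are whatever is configured.
const remotes = await client.cluster.remoteInfo()
for (const [alias, info] of Object.entries(remotes)) {
  // `connected: false` only means no connection is open right now, not that the
  // remote cluster is down; a cross-cluster search or a `/_resolve/cluster` call
  // can trigger a reconnection attempt.
  console.log(alias, info.connected)
}
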
2 changes: 1 addition & 1 deletion src/api/api/esql.ts
@@ -53,7 +53,7 @@ export default class Esql {
async asyncQuery (this: That, params: T.EsqlAsyncQueryRequest | TB.EsqlAsyncQueryRequest, options?: TransportRequestOptions): Promise<T.EsqlAsyncQueryResponse>
async asyncQuery (this: That, params: T.EsqlAsyncQueryRequest | TB.EsqlAsyncQueryRequest, options?: TransportRequestOptions): Promise<any> {
const acceptedPath: string[] = []
- const acceptedBody: string[] = ['columnar', 'filter', 'locale', 'params', 'profile', 'query', 'tables', 'include_ccs_metadata', 'wait_for_completion_timeout']
+ const acceptedBody: string[] = ['columnar', 'filter', 'locale', 'params', 'profile', 'query', 'tables', 'include_ccs_metadata', 'wait_for_completion_timeout', 'keep_alive', 'keep_on_completion']
const querystring: Record<string, any> = {}
// @ts-expect-error
const userBody: any = params?.body
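
This hunk moves `keep_alive` and `keep_on_completion` into the accepted body parameters for the ES|QL async query. A hedged usage sketch, assuming an existing `client` instance; the index pattern and durations are illustrative:

// Start an ES|QL query asynchronously and retain its result for later retrieval.
const started = await client.esql.asyncQuery({
  query: 'FROM logs-* | STATS count = COUNT(*) BY host.name | LIMIT 10',
  wait_for_completion_timeout: '2s', // return early with an id if not finished in time
  keep_on_completion: true,          // keep the stored result even if it finishes in time
  keep_alive: '5m'                   // how long the stored result remains retrievable
})
// If the query is still running, the async query id in the response can later be
// passed to client.esql.asyncQueryGet to poll for the result.
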
2 changes: 1 addition & 1 deletion src/api/api/fleet.ts
@@ -139,7 +139,7 @@ export default class Fleet {
}

/**
- * Executes several [fleet searches](https://www.elastic.co/guide/en/elasticsearch/reference/current/fleet-search.html) with a single API request. The API follows the same structure as the [multi search](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html) API. However, similar to the fleet search API, it supports the wait_for_checkpoints parameter.
+ * Executes several fleet searches with a single API request. The API follows the same structure as the multi search (`_msearch`) API. However, similar to the fleet search API, it supports the `wait_for_checkpoints` parameter.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/fleet-multi-search.html | Elasticsearch API documentation}
*/
async msearch<TDocument = unknown> (this: That, params: T.FleetMsearchRequest | TB.FleetMsearchRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.FleetMsearchResponse<TDocument>>
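
For context, a hedged sketch of a Fleet multi search call; the `searches` header/body pairs follow the standard msearch shape, and the index name and checkpoint value are placeholders rather than values from this change:

// Run a search through the Fleet multi search API, waiting until the given
// checkpoint (sequence number) is visible before the search executes.
const responses = await client.fleet.msearch({
  index: 'my-index',
  wait_for_checkpoints: [77],                   // placeholder checkpoint
  searches: [
    {},                                         // msearch-style header
    { query: { match: { status: 'online' } } }  // msearch-style body
  ]
})
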
2 changes: 1 addition & 1 deletion src/api/api/indices.ts
@@ -79,7 +79,7 @@ export default class Indices {

/**
* Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens. Generating excessive amount of tokens may cause a node to run out of memory. The `index.analyze.max_token_count` setting enables you to limit the number of tokens that can be produced. If more than this limit of tokens gets generated, an error occurs. The `_analyze` endpoint without a specified index will always use `10000` as its limit.
- * @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/indices-analyze.html | Elasticsearch API documentation}
+ * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v8/operation/operation-indices-analyze | Elasticsearch API documentation}
*/
async analyze (this: That, params?: T.IndicesAnalyzeRequest | TB.IndicesAnalyzeRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.IndicesAnalyzeResponse>
async analyze (this: That, params?: T.IndicesAnalyzeRequest | TB.IndicesAnalyzeRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.IndicesAnalyzeResponse, unknown>>
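
Only the documentation link changes here; as a small illustration of the analyze call itself (analyzer and text are arbitrary examples, `client` is an existing Client instance):

// Analyze a string without targeting an index; in this mode the endpoint caps
// output at 10000 tokens, per the token-limit note in the comment above.
const analysis = await client.indices.analyze({
  analyzer: 'standard',
  text: 'The QUICK brown fox'
})
console.log(analysis.tokens?.map(t => t.token)) // e.g. [ 'the', 'quick', 'brown', 'fox' ]
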
12 changes: 6 additions & 6 deletions src/api/api/inference.ts
@@ -45,7 +45,7 @@ export default class Inference {
}

/**
- * Perform chat completion inference The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the `chat_completion` task type for `openai` and `elastic` inference services. NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the `openai` service or the `elastic` service, use the Chat completion inference API.
+ * Perform chat completion inference The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the `chat_completion` task type for `openai` and `elastic` inference services. NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the `openai`, `hugging_face` or the `elastic` service, use the Chat completion inference API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/chat-completion-inference-api.html | Elasticsearch API documentation}
*/
async chatCompletionUnified (this: That, params: T.InferenceChatCompletionUnifiedRequest | TB.InferenceChatCompletionUnifiedRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferenceChatCompletionUnifiedResponse>
@@ -262,7 +262,7 @@ export default class Inference {
}

/**
- * Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
+ * Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. The following integrations are available through the inference API. You can find the available task types next to the integration name: * AlibabaCloud AI Search (`completion`, `rerank`, `sparse_embedding`, `text_embedding`) * Amazon Bedrock (`completion`, `text_embedding`) * Anthropic (`completion`) * Azure AI Studio (`completion`, `text_embedding`) * Azure OpenAI (`completion`, `text_embedding`) * Cohere (`completion`, `rerank`, `text_embedding`) * Elasticsearch (`rerank`, `sparse_embedding`, `text_embedding` - this service is for built-in models and models uploaded through Eland) * ELSER (`sparse_embedding`) * Google AI Studio (`completion`, `text_embedding`) * Google Vertex AI (`rerank`, `text_embedding`) * Hugging Face (`chat_completion`, `completion`, `rerank`, `text_embedding`) * Mistral (`chat_completion`, `completion`, `text_embedding`) * OpenAI (`chat_completion`, `completion`, `text_embedding`) * VoyageAI (`text_embedding`, `rerank`) * Watsonx inference integration (`text_embedding`) * JinaAI (`text_embedding`, `rerank`)
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/put-inference-api.html | Elasticsearch API documentation}
*/
async put (this: That, params: T.InferencePutRequest | TB.InferencePutRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferencePutResponse>
@@ -352,7 +352,7 @@ export default class Inference {
}

/**
- * Create an Amazon Bedrock inference endpoint. Creates an inference endpoint to perform an inference task with the `amazonbedrock` service. >info > You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.
+ * Create an Amazon Bedrock inference endpoint. Create an inference endpoint to perform an inference task with the `amazonbedrock` service. >info > You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/infer-service-amazon-bedrock.html | Elasticsearch API documentation}
*/
async putAmazonbedrock (this: That, params: T.InferencePutAmazonbedrockRequest | TB.InferencePutAmazonbedrockRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferencePutAmazonbedrockResponse>
@@ -757,15 +757,15 @@ export default class Inference {
}

/**
- * Create a Hugging Face inference endpoint. Create an inference endpoint to perform an inference task with the `hugging_face` service. You must first create an inference endpoint on the Hugging Face endpoint page to get an endpoint URL. Select the model you want to use on the new endpoint creation page (for example `intfloat/e5-small-v2`), then select the sentence embeddings task under the advanced configuration section. Create the endpoint and copy the URL after the endpoint initialization has been finished. The following models are recommended for the Hugging Face service: * `all-MiniLM-L6-v2` * `all-MiniLM-L12-v2` * `all-mpnet-base-v2` * `e5-base-v2` * `e5-small-v2` * `multilingual-e5-base` * `multilingual-e5-small`
+ * Create a Hugging Face inference endpoint. Create an inference endpoint to perform an inference task with the `hugging_face` service. Supported tasks include: `text_embedding`, `completion`, and `chat_completion`. To configure the endpoint, first visit the Hugging Face Inference Endpoints page and create a new endpoint. Select a model that supports the task you intend to use. For Elastic's `text_embedding` task: The selected model must support the `Sentence Embeddings` task. On the new endpoint creation page, select the `Sentence Embeddings` task under the `Advanced Configuration` section. After the endpoint has initialized, copy the generated endpoint URL. Recommended models for `text_embedding` task: * `all-MiniLM-L6-v2` * `all-MiniLM-L12-v2` * `all-mpnet-base-v2` * `e5-base-v2` * `e5-small-v2` * `multilingual-e5-base` * `multilingual-e5-small` For Elastic's `chat_completion` and `completion` tasks: The selected model must support the `Text Generation` task and expose OpenAI API. HuggingFace supports both serverless and dedicated endpoints for `Text Generation`. When creating dedicated endpoint select the `Text Generation` task. After the endpoint is initialized (for dedicated) or ready (for serverless), ensure it supports the OpenAI API and includes `/v1/chat/completions` part in URL. Then, copy the full endpoint URL for use. Recommended models for `chat_completion` and `completion` tasks: * `Mistral-7B-Instruct-v0.2` * `QwQ-32B` * `Phi-3-mini-128k-instruct` For Elastic's `rerank` task: The selected model must support the `sentence-ranking` task and expose OpenAI API. HuggingFace supports only dedicated (not serverless) endpoints for `Rerank` so far. After the endpoint is initialized, copy the full endpoint URL for use. Tested models for `rerank` task: * `bge-reranker-base` * `jina-reranker-v1-turbo-en-GGUF`
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/infer-service-hugging-face.html | Elasticsearch API documentation}
*/
async putHuggingFace (this: That, params: T.InferencePutHuggingFaceRequest | TB.InferencePutHuggingFaceRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferencePutHuggingFaceResponse>
async putHuggingFace (this: That, params: T.InferencePutHuggingFaceRequest | TB.InferencePutHuggingFaceRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.InferencePutHuggingFaceResponse, unknown>>
async putHuggingFace (this: That, params: T.InferencePutHuggingFaceRequest | TB.InferencePutHuggingFaceRequest, options?: TransportRequestOptions): Promise<T.InferencePutHuggingFaceResponse>
async putHuggingFace (this: That, params: T.InferencePutHuggingFaceRequest | TB.InferencePutHuggingFaceRequest, options?: TransportRequestOptions): Promise<any> {
const acceptedPath: string[] = ['task_type', 'huggingface_inference_id']
- const acceptedBody: string[] = ['chunking_settings', 'service', 'service_settings']
+ const acceptedBody: string[] = ['chunking_settings', 'service', 'service_settings', 'task_settings']
const querystring: Record<string, any> = {}
// @ts-expect-error
const userBody: any = params?.body
@@ -847,7 +847,7 @@ export default class Inference {
}

/**
- * Create a Mistral inference endpoint. Creates an inference endpoint to perform an inference task with the `mistral` service.
+ * Create a Mistral inference endpoint. Create an inference endpoint to perform an inference task with the `mistral` service.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/infer-service-mistral.html | Elasticsearch API documentation}
*/
async putMistral (this: That, params: T.InferencePutMistralRequest | TB.InferencePutMistralRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferencePutMistralResponse>
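
The Hugging Face endpoint now accepts `task_settings` and additional task types (`chat_completion`, `completion`, `rerank`). A hedged sketch of creating a rerank endpoint; the inference id, API key, and endpoint URL are placeholders:

// Create a Hugging Face rerank inference endpoint using the newly accepted
// `task_settings` body field. Credentials and URLs below are placeholders only.
const endpoint = await client.inference.putHuggingFace({
  task_type: 'rerank',
  huggingface_inference_id: 'hf-reranker',
  service: 'hugging_face',
  service_settings: {
    api_key: 'hf_xxx',                                      // placeholder API key
    url: 'https://my-endpoint.endpoints.huggingface.cloud'  // placeholder dedicated endpoint URL
  },
  task_settings: {
    return_documents: true, // include document text in the rerank response
    top_n: 3                // return only the three best-ranked documents
  }
})
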
17 changes: 13 additions & 4 deletions src/api/types.ts
@@ -1734,6 +1734,7 @@ export type SearchSourceConfig = boolean | SearchSourceFilter | Fields
export type SearchSourceConfigParam = boolean | Fields

export interface SearchSourceFilter {
+ exclude_vectors?: boolean
excludes?: Fields
exclude?: Fields
includes?: Fields
Expand Down Expand Up @@ -10627,8 +10628,6 @@ export interface EsqlAsyncQueryRequest extends RequestBase {
delimiter?: string
drop_null_columns?: boolean
format?: EsqlQueryEsqlFormat
- keep_alive?: Duration
- keep_on_completion?: boolean
columnar?: boolean
filter?: QueryDslQueryContainer
locale?: string
@@ -10638,6 +10637,8 @@
tables?: Record<string, Record<string, EsqlTableValuesContainer>>
include_ccs_metadata?: boolean
wait_for_completion_timeout?: Duration
+ keep_alive?: Duration
+ keep_on_completion?: boolean
}

export type EsqlAsyncQueryResponse = EsqlResult
@@ -11019,6 +11020,7 @@ export interface IlmExplainLifecycleLifecycleExplainManaged {
step_time_millis?: EpochTime<UnitMillis>
phase_execution?: IlmExplainLifecycleLifecycleExplainPhaseExecution
time_since_index_creation?: Duration
+ skip: boolean
}

export interface IlmExplainLifecycleLifecycleExplainPhaseExecution {
@@ -13268,11 +13270,17 @@ export interface InferenceHuggingFaceServiceSettings {
api_key: string
rate_limit?: InferenceRateLimitSetting
url: string
+ model_id?: string
}

export type InferenceHuggingFaceServiceType = 'hugging_face'

- export type InferenceHuggingFaceTaskType = 'text_embedding'
+ export interface InferenceHuggingFaceTaskSettings {
+   return_documents?: boolean
+   top_n?: integer
+ }
+
+ export type InferenceHuggingFaceTaskType = 'chat_completion' | 'completion' | 'rerank' | 'text_embedding'

export interface InferenceInferenceChunkingSettings {
max_chunk_size?: integer
Expand Down Expand Up @@ -13351,7 +13359,7 @@ export interface InferenceMistralServiceSettings {

export type InferenceMistralServiceType = 'mistral'

- export type InferenceMistralTaskType = 'text_embedding'
+ export type InferenceMistralTaskType = 'text_embedding' | 'completion' | 'chat_completion'

export interface InferenceOpenAIServiceSettings {
api_key: string
@@ -13639,6 +13647,7 @@ export interface InferencePutHuggingFaceRequest extends RequestBase {
chunking_settings?: InferenceInferenceChunkingSettings
service: InferenceHuggingFaceServiceType
service_settings: InferenceHuggingFaceServiceSettings
+ task_settings?: InferenceHuggingFaceTaskSettings
}

export type InferencePutHuggingFaceResponse = InferenceInferenceEndpointInfo
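
The new `exclude_vectors` flag on `SearchSourceFilter` can be combined with the existing `excludes` list. A hedged sketch; index, field, and query values are illustrative:

// Return matching documents while stripping vector fields (plus one named field)
// from the returned _source. Index and field names are illustrative only.
const result = await client.search({
  index: 'articles',
  query: { match: { title: 'observability' } },
  _source: {
    exclude_vectors: true,       // omit vector fields from the returned _source
    excludes: ['internal_notes'] // regular source exclusion still applies alongside it
  }
})
console.log(result.hits.hits.length)
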