prompt

prompt [prompt]

Returns the response to a prompt sent to a GPT.

arguments:

prompt

The prompt to provide to the GPT. (type: string)
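
For example, a minimal query might look like the following (the prompt text is illustrative):

prompt "summarize the history of the internet in one sentence"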

flags:

--appendStage

Appends the results from a previous stage to the current stage. Provide a stage label, a stage index, or true to append the immediately previous results.
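
For example, the following sketch appends the previous stage's results to the prompt stage (the URL and prompt text are illustrative, and the open command is assumed as the preceding stage):

open https://example.com || prompt "describe the main topics in these results" --appendStage true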

--cache

A boolean that determines whether to use the cache. Most commands default to true.
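
For example, to bypass the cache and force a fresh completion (the prompt text is illustrative):

prompt "what is today's date" --cache false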

--checkpoint

Format: "{CHECKPOINT NAME}:{COLUMN}". Stores the value of the provided column (from the first row of results) under the provided name for use as a checkpoint in scheduled queries or other stages. Not encrypted. Can be accessed using $CHECKPOINTS.{CHECKPOINT NAME}$

--credential

An OpenAI credential to use instead of the default credential named 'openai' that this command requires.
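
For example, to use a credential named openai-backup instead of the default (the credential name and prompt text are illustrative):

prompt "hello" --credential openai-backup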

--crul

When true, returns a crul query for the provided prompt. The result will likely require tweaking, but it can be a good starting point for constructing queries.

--curl

When true, returns a crul query built around the curl command for the provided prompt. The result will likely require tweaking, but it can be a good starting point for constructing queries.
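
For example, to ask for a curl-based starting point (the prompt text is illustrative):

prompt "fetch the latest headlines from a public news api" --curl true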

--enrich

Enriches each result row with the previous row. The previous row's columns are appended with a _previous suffix.

--filter

A filter to run on the command results before completing the command. If not provided, no filter is run on the results.
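
For example, the following sketch applies a filter to the results before the command completes (the filter expression and column name are illustrative, since the exact expression syntax is not shown here):

prompt "classify these messages as spam or not spam" --filter "classification == 'spam'"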

--fresh

Starts the stage as if it were a fresh query, so it will not use any previous results.

--guid

Adds a column populated with a random GUID.

--json

When true, returns a JSON object for the provided prompt. The result will likely require tweaking, but it can be a good starting point for constructing queries.

--labelStage

Labels a stage with a user-provided label.
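
For example, labeling a stage makes it addressable by flags such as --appendStage in later stages (the label and prompt text are illustrative):

prompt "summarize the quarterly report" --labelStage summary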

--maxConcurrent

Overrides the system maximum number of concurrent workers for a stage.

--prompt.frequency_penalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

--prompt.logit_bias

Modify the likelihood of specified tokens appearing in the completion.

--prompt.model

ID of the model to use.

--prompt.presence_penalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

--prompt.temperature

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

--prompt.top_p

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
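
For example, the following sketch combines several of the prompt.* flags above for a focused, deterministic completion (the model ID and prompt text are illustrative):

prompt "extract the company names from this text" --prompt.model gpt-4 --prompt.temperature 0.2 --prompt.presence_penalty 0.5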

--randomizeHash

Randomizes the stage hash, even if args and flags are the same.

--stats

Controls whether a stats calculation is run on a stage after it completes.

--table

A comma-separated list of columns to include in the command results. If not provided, all columns are included.
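
For example, to limit the results to specific columns (the column names and prompt text are illustrative, since the command's result columns are not listed here):

prompt "list the planets of the solar system" --table "prompt,response"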

--type

Each command has a default type, either "mapping" or "reducing". Some commands can operate as either: when "reducing", they operate on all rows at once; when "mapping", they operate on one row at a time.

--variable

Format: "{VARIABLE NAME}:{COLUMN}". Stores the value of the provided column (from the first row of results) under the provided name for use as a variable in other stages. Can be accessed using $VARIABLES.{VARIABLE NAME}$. Stored as an encrypted secret. Not stored across queries.

--while

Reruns the stage until the provided expression is valid for the first row of results.
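
For example, the following sketch reruns the stage until the expression holds for the first result row (the expression, column name, and prompt text are illustrative; --cache false avoids rerunning against a cached result):

prompt "answer yes or no: is the sky blue" --while "response == 'yes'" --cache false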

support

AMI_ENTERPRISE AMI_FREE AMI_PRO BINARY_ENTERPRISE BINARY_FREE BINARY_PRO DESKTOP_ENTERPRISE DESKTOP_FREE DESKTOP_PRO DOCKER_ENTERPRISE DOCKER_FREE DOCKER_PRO