path: root/README.md
author    flu0r1ne <flu0r1ne@flu0r1ne.net>  2023-05-06 05:52:00 -0500
committer flu0r1ne <flu0r1ne@flu0r1ne.net>  2023-05-06 05:52:00 -0500
commit    42f7e201af301696066a792832e3b47c5597a6fc (patch)
tree      fa0d812f1854a36589ae25c2b02a4e9805511442 /README.md
parent    603ebf9a866314b3304f800d50c09a3cd55d8546 (diff)
Update readme / bump version
Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 25
1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/README.md b/README.md
index 83e3a0d..653c2dc 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@ gpt-chat-cli
> This is a WIP. Please consider it experimental.
-A simple ChatGPT CLI.
+A simple yet powerful ChatGPT CLI.
### Installation:
@@ -76,12 +76,12 @@ Note that this implementation uses the `powi()` method to compute the squared di
### Usage:
```
-usage: gpt-chat-cli [-h] [-m MODEL] [-t TEMPERATURE] [-f FREQUENCY_PENALTY] [-p PRESENCE_PENALTY] [-k MAX_TOKENS] [-s TOP_P] [-n N_COMPLETIONS]
- [--adornments {AutoDetectedOption.ON,AutoDetectedOption.OFF,AutoDetectedOption.AUTO}] [--color {AutoDetectedOption.ON,AutoDetectedOption.OFF,AutoDetectedOption.AUTO}]
- message
+usage: gcli.py [-h] [-m MODEL] [-t TEMPERATURE] [-f FREQUENCY_PENALTY] [-p PRESENCE_PENALTY] [-k MAX_TOKENS] [-s TOP_P] [-n N_COMPLETIONS] [--adornments {on,off,auto}] [--color {on,off,auto}] [--version] [-l]
+ [-i]
+ [message]
positional arguments:
- message The contents of the message. When used in chat mode, this is the initial message if provided.
+ message The contents of the message. When in an interactive session, this is the initial prompt, if provided.
options:
-h, --help show this help message and exit
@@ -100,10 +100,13 @@ options:
comprising the top 10% probability mass are considered.
-n N_COMPLETIONS, --n-completions N_COMPLETIONS
How many chat completion choices to generate for each input message.
- --adornments {AutoDetectedOption.ON,AutoDetectedOption.OFF,AutoDetectedOption.AUTO}
+ --adornments {on,off,auto}
Show adornments to indicate the model and response. Can be set to 'on', 'off', or 'auto'.
- --color {AutoDetectedOption.ON,AutoDetectedOption.OFF,AutoDetectedOption.AUTO}
+ --color {on,off,auto}
Set color to 'on', 'off', or 'auto'.
+ --version Print version and exit
+ -l, --list-models List models and exit
+ -i, --interactive Start an interactive session
```
### Features:
@@ -112,5 +115,9 @@ options:
- [x] Color and adornments
- [x] Support for multiple completions
- [x] Support for any model which is supported through the chat completions API. [See model endpoint compatibility.](https://platform.openai.com/docs/models/model-endpoint-compatibility)
-- [x] Capability to modify parameters including temperature, frequency penalty, presence penalty, top p, and the maximum number of tokens emitted.
-
+- [x] Capability to modify parameters including temperature, frequency penalty, presence penalty, top p, and the maximum number of tokens emitted
+- [x] Interactive sessions
+- [x] Code syntax highlighting (when the LLM labels code blocks)
+- [x] Interactive mode with emacs-style editing
+- [x] Input from standard input, pipes, and heredocs
+- [x] List available models
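The new interface shown in the help text above (the `{on,off,auto}` choice strings, the `--version`, `-l`, and `-i` flags, and the now-optional `[message]` argument) maps directly onto Python's `argparse`. The sketch below is not the project's actual code; it only mirrors the flags visible in the usage block, and the program name and defaults are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the flags in the --help output above; defaults are guesses,
    # not taken from the gpt-chat-cli source.
    p = argparse.ArgumentParser(prog="gcli.py")
    p.add_argument("-m", "--model")
    p.add_argument("-t", "--temperature", type=float)
    p.add_argument("-f", "--frequency-penalty", type=float)
    p.add_argument("-p", "--presence-penalty", type=float)
    p.add_argument("-k", "--max-tokens", type=int)
    p.add_argument("-s", "--top-p", type=float)
    p.add_argument("-n", "--n-completions", type=int)
    # Plain choice strings, matching the change away from
    # AutoDetectedOption.ON/OFF/AUTO in the diff.
    p.add_argument("--adornments", choices=["on", "off", "auto"], default="auto")
    p.add_argument("--color", choices=["on", "off", "auto"], default="auto")
    p.add_argument("--version", action="store_true")
    p.add_argument("-l", "--list-models", action="store_true")
    p.add_argument("-i", "--interactive", action="store_true")
    # nargs="?" makes the message optional, matching [message] in the usage line.
    p.add_argument("message", nargs="?")
    return p

args = build_parser().parse_args(["-i", "-t", "0.7", "Hello"])
```

With `nargs="?"`, invoking the tool with no positional argument leaves `message` as `None`, which is what lets `-i` start an interactive session without an initial prompt.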