Name | Modified | Size
---|---|---
h2ogpt-0.2.0-py3-none-any.whl | 2024-03-02 | 450.4 kB
h2oGPT 0.2.0 Release source code.tar.gz | 2024-03-02 | 30.1 MB
h2oGPT 0.2.0 Release source code.zip | 2024-03-02 | 30.2 MB
README.md | 2024-03-02 | 51.0 kB
Totals: 4 items | | 60.8 MB
Official Release for h2oGPT 0.2.0
## What's Changed
- Add code to push spaces chatbot by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/46
- Fixes [#48] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/55
- More HF spaces restrictions to prevent OOM or no-good choices being chosen by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/57
- Add max_beams to client_test.py by @lo5 in https://github.com/h2oai/h2ogpt/pull/64
- Fix directory name from h2o-llm to h2ogpt on install tutorial by @cpatrickalves in https://github.com/h2oai/h2ogpt/pull/63
- h2o theme for background by @jefffohl in https://github.com/h2oai/h2ogpt/pull/68
- Add option to save prompt and response as .json. by @arnocandel in https://github.com/h2oai/h2ogpt/pull/69
- Update tos.md by @eltociear in https://github.com/h2oai/h2ogpt/pull/70
- Use SAVE_DIR and --save_dir instead of SAVE_PATH and --save_path. by @arnocandel in https://github.com/h2oai/h2ogpt/pull/71
- Make chat optional from UI/client by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/74
- Compare models by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/42
- H2O gradio theme by @jefffohl in https://github.com/h2oai/h2ogpt/pull/84
- Refactor gradio into separate file and isolate it from torch specific stuff by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/85
- Refactor finetune so some of it can be used to check data and its tokenization by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/93
- Llama flash attn by @arnocandel in https://github.com/h2oai/h2ogpt/pull/86
- Give default context to help chatbot by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/100
- CUDA mismatch work-around for no gradio case by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/101
- Add Triton deployment template. by @arnocandel in https://github.com/h2oai/h2ogpt/pull/91
- Check data for unhelpful responses by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/103
- Clear torch cache memory every 20s by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/90
- Try transformers experimental streaming. Still uses threads, so probably won't fix browser exit GPU memory issue by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/98
- Handle thread stream generate exceptions. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/110
- Specify chat separator by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/114
- [DOCS] README typo fix and readability improvements by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/118
- Support OpenAssistant models in basic form, including 30B xor one by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/119
- Add stopping condition to pipeline case by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/120
- Allow auth control from CLI by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/123
- Improve data prep by @arnocandel in https://github.com/h2oai/h2ogpt/pull/122
- [DOCS] Grammar / readability improvements for FAQ.md by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/124
- neox Flash attn by @arnocandel in https://github.com/h2oai/h2ogpt/pull/31
- Langchain integration by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/111
- Allow CLI add to db and clean-up handling of evaluate args by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/137
- Add zip upload and parallel doc processing by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/138
- Control visibility of buttons, but still gradio issues mean can't spin/block button while processing in background by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/140
- Add URL support by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/142
- HTML, DOCX, and better markdown support by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/143
- odt, pptx, epub, UI text paste, eml support (both text/html and text/plain) and refactor so glob simpler by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/144
- Reform chatbot client API code by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/117
- Add import control check to avoid leaking optional langchain stuff into generate/gradio. Add test by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/146
- [DevOps] Snyk Integration by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/131
- Add image support and show sources after upload by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/147
- Update finetune.py by @orellavie1212 in https://github.com/h2oai/h2ogpt/pull/132
- ArXiv support via URL in chatbot UI by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/152
- Improve caption, include blip2 as option by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/153
- Control chats, save, export, import and otherwise manage by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/156
- Mac/Windows install and GPT4All as base model for pure CPU mode support by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/157
- Move loaders out of finetune, which is only for training, while loader used for generation too by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/161
- Allow selection of subset of docs in collection for query by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/163
- Improve datasource layout by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/164
- Refactor run_qa_db a bit, so can do other tasks by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/167
- Use latest peft/transformers/accelerate/bitsandbytes for 4-bit (qlora) by @arnocandel in https://github.com/h2oai/h2ogpt/pull/166
- Refactor run_eval out of generate.py by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/173
- Add CLI mode with tests by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/174
- Separate out FAISS from requirements by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/184
- Generalize h2oai_pipeline so works for any instruct model we have prompt_type for, so run_db_qa will stream and stop just like non-db code path by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/190
- Ensure can use offline by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/191
- Fix and test llamacpp by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/197
- Improve use of ctx vs. max_new_tokens for non-HF models, and if no docs, don't insert == since no docs, just confuses model by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/199
- UI help in FAQ by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/205
- Quantized model updates, switch to recommending TheBloke by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/208
- Fix nochat API by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/209
- Move docs and optional reqs to directories by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/214
- Allow for custom eval json file by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/227
- Fix run_eval and validate parameters are all passed by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/228
- Add setup.py wheel building option by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/229
- [DevOps] Fix condition issue for snyk test & snyk monitor by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/169
- Add weaviate support by @hsm207 in https://github.com/h2oai/h2ogpt/pull/218
- More weaviate tests by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/231
- Allow add to db when loading from generate by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/212
- Allow update db from UI if files changed, since normally not constantly checking for new files by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/232
- More control over max_max_new_tokens and memory behavior from generate args by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/234
- Make API easier, and add prompt_dict for custom control over prompt as example of new API parameter don't need to pass by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/238
- Chunk improve by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/239
- Fix `TypeError: can only concatenate str (not "list") to str` on startup by @this in https://github.com/h2oai/h2ogpt/pull/242
- Fix nochat in UI so enter works to submit again, and if langchain mode used then shows HTML links for sources by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/244
- Improve subset words and code by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/245
- use instructor embedding, and add migration of embeddings if ever changes, at least for chroma by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/247
- Add extra clear torch cache calls so embedding on GPU doesn't stick to GPU by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/252
- Fixes [#249] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/255
- Support connecting to a local weaviate instance by @hsm207 in https://github.com/h2oai/h2ogpt/pull/236
- .gitignore updated for .idea and venv by @fazpu in https://github.com/h2oai/h2ogpt/pull/256
- move enums and add test for export copy since keep changing what files have what structures by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/260
- Ensure generate hyperparameters are passed through to h2oai_pipeline.py for generation by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/265
- Submit button is now primary + more spacing between prompt area and action buttons by @fazpu in https://github.com/h2oai/h2ogpt/pull/261
- input prompt - primary color border added + change in label text by @fazpu in https://github.com/h2oai/h2ogpt/pull/259
- prompt form moved to a separate file by @fazpu in https://github.com/h2oai/h2ogpt/pull/258
- Upgrade gradio by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/269
- Fixes [#270] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/272
- A couple of small updates to the documentation by @3x0dv5 in https://github.com/h2oai/h2ogpt/pull/274
- Add documentation on how to connect to weaviate by @hsm207 in https://github.com/h2oai/h2ogpt/pull/267
- Update Weaviate and FAISS a bit to be closer to Chroma in h2oGPT with limitations. Add testing. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/275
- Escape so it outputs `$LD_LIBRARY_PATH:/usr/local/cuda/lib64/` by @3x0dv5 in https://github.com/h2oai/h2ogpt/pull/276
- Add h2oGPT Client by @this in https://github.com/h2oai/h2ogpt/pull/133
- Update requirements and add code to get latest versions by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/281
- Pass actual eos id to generate, else doesn't know how to stop early if using non-standard eos id (normally=0, falcon GM was 11) by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/286
- Fixes [#291] -- make user_path if doesn't exist but passed, and move gradio temp file to user_path if passed to generate. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/292
- Add QUIP et al. metrics for context-Q-A testing by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/293
- Add ci support + wheel by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/243
- Option to fill-up context if top_k_docs=-1 by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/294
- Update README.md by @arnocandel in https://github.com/h2oai/h2ogpt/pull/296
- Fixes [#279] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/297
- Update README.md by @eltociear in https://github.com/h2oai/h2ogpt/pull/301
- Add support for text-generation-server, gradio inference server, OpenAI inference server. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/295
- Use latest peft, to fix export failure. by @arnocandel in https://github.com/h2oai/h2ogpt/pull/310
- Model N by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/313
- Control maxtime for TGI by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/322
- Fix prompting by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/327
- Protect evaluate against bad inputs by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/326
- Add explicit cast to bool for visible=kwarg[...] by @parkeraddison in https://github.com/h2oai/h2ogpt/pull/328
- For multi-model ChatAll view, save all models together so can recover together by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/331
- Fixes [#333] by quickly checking if can reach endpoint using requests by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/334
- ChatAll: stream as fast as one can with short timeout=0.01 to avoid stalling any single endpoint to appear in UI by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/335
- Handle exceptions better when doing multi-view model lock, and don't block good endpoints by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/337
- Queue input to avoid fresh submit using input_list at click/enter time, else truncates result because it uses input_list from time of click by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/341
- Cleanup gradio UI a bit, ask at top so not lost at bottom by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/344
- Log extra information, and fix max_max_new_tokens by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/347
- Fix prompting for gradio->gradio by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/348
- Reset client hash every client call, and reset client state if server changes for when want stateless client by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/351
- If have HF model/tokenizer, use that instead of fake tokenizer (tiktoken) since see too large differences and failures even with 250 token buffer, still off by another 350. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/352
- If only can add to MyData, automatically add without having to click button by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/353
- Get number of tokens after limited, although before prompt_type is applied, to reduce max_new_tokens for OpenAI by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/366
- Typo: One some systems -> On some systems by @ernstvanderlinden in https://github.com/h2oai/h2ogpt/pull/377
- 8bit mode command fix FAQ.md by @0xraks in https://github.com/h2oai/h2ogpt/pull/375
- Fixes [#368] by rotating session hash for streaming too. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/384
- Add a docker runtime, to be used to run h2o gpt models by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/381
- Update README_LangChain.md by @cimadure in https://github.com/h2oai/h2ogpt/pull/387
- Fixes [#382] for offloading llama to GPU using llama.cpp. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/393
- Fix nochat by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/394
- Client revamp by @this in https://github.com/h2oai/h2ogpt/pull/349
- Update README_CPU.md by @wienke in https://github.com/h2oai/h2ogpt/pull/402
- Add summarization action by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/365
- Move files to src by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/406
- Add test to the Client to validate parameters order with h2ogpt by @this in https://github.com/h2oai/h2ogpt/pull/392
- Tweaks for MAC M1 by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/408
- Add AutoGPTQ -- Fixes [#263] and Fixes [#339] and Fixes [#417] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/416
- Fix prompt answer after broken after vicuna addition by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/422
- Improve UI and UX -- Fixes [#285] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/429
- FAQ.md - make the information about hugging face models stand out + fix for the prompter link by @fazpu in https://github.com/h2oai/h2ogpt/pull/435
- [DOCS] Fix typos and improve readability (FAQ page) by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/441
- Autoset langchain_mode if not passed by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/436
- Fix attribute error for NoneType - Python 3.9 by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/445
- Add vLLM support -- Fixes [#312] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/454
- add more deps to docker by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/452
- Update Windows bitsandbytes wheel link by @jllllll in https://github.com/h2oai/h2ogpt/pull/458
- Update docs for llama metal support by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/470
- [DevOps] Update README for docker runtime image consumption by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/477
- Fix typo in timeout_iterator.py by @eltociear in https://github.com/h2oai/h2ogpt/pull/479
- Add E2E test for fine-tuning/export/tgi, exposed issue with TGI 0.9.1, works in 0.8.2/0.9.2 by @arnocandel in https://github.com/h2oai/h2ogpt/pull/424
- Use latest bitsandbytes and accelerate. by @arnocandel in https://github.com/h2oai/h2ogpt/pull/485
- Revert "Use latest bitsandbytes and accelerate." by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/488
- Update main readme for docker runtime consumption by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/489
- Package more modules to python wheel by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/492
- Add note to install torch 2.1 for MPS by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/491
- UI is spread to the full width by @fazpu in https://github.com/h2oai/h2ogpt/pull/495
- add prompt template for llama2 by @arnocandel in https://github.com/h2oai/h2ogpt/pull/494
- Fixes [#398] -- Custom collection/db and user_path, persisted to disk f… by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/476
- Remove system prompt for llama2, too guarded. by @arnocandel in https://github.com/h2oai/h2ogpt/pull/506
- Minor Shell Script Changes by @slycordinator in https://github.com/h2oai/h2ogpt/pull/487
- Add tagging docker runtime with semver by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/500
- Update readme for semver docker runtime image by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/511
- Fixes [#446] Control chat history being added to context for langchain or not by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/507
- LLaMa2 with AutoGPTQ and 16-bit mode with RoPE scaling by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/517
- Fixes [#514] Fix llama2 prompting by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/523
- Support exllama by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/526
- Exclude unnecessary files & directories from wheel by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/537
- Improve docs by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/542
- feat: expose 'root_path' from gradio by @jcatana in https://github.com/h2oai/h2ogpt/pull/547
- Add long context tests and fix tokenizer truncation for activated rope_scaling by @arnocandel in https://github.com/h2oai/h2ogpt/pull/524
- Set max_seq_len outside config, config can't always set due to protections in class by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/554
- Add latest tag to docker runtime by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/560
- Load args from env vars as long as var starts with H2OGPT_ by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/556
- Isolate n_gpus base handling for CUDA from MPS by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/563
- Efficient parallel summarization and use full docs, not vectordb chunks. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/551
- Unblock streaming for multi-stream case by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/577
- Use docker entrypoint args instead of custom entry point by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/521
- Minor docs update by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/591
- Control embedding migration by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/585
- Fix client's ValueError: An event handler (fn) didn't receive enough input values (needed: 34, got: 32). by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/600
- Upgrade gradio-client version to v0.3.0 in the Client by @this in https://github.com/h2oai/h2ogpt/pull/601
- Test docker of TGI + h2oGPT dockers by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/602
- docs: Add GCR link to the Docker README by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/604
- Better handling of pdfs if broken by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/608
- Add replicate support, Fixes [#603] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/606
- Document how to disable chroma telemetry by @mmalohlava in https://github.com/h2oai/h2ogpt/pull/543
- Use `/submit_nochat_api` for the Text Completion API in the Client by @this in https://github.com/h2oai/h2ogpt/pull/609
- Allow server to save history.json with requests headers by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/613
- Fine tune llama2 by @arnocandel in https://github.com/h2oai/h2ogpt/pull/574
- [Client] Parse the return value from `/submit_nochat_api` to extract the response by @this in https://github.com/h2oai/h2ogpt/pull/627
- Handle persistence of user states for personal/scratch spaces. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/618
- Windows installer by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/647
- Ensure meta data in response and Fixes [#649] and upgrade gradio by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/653
- Fix docker permissions and allow using a non root user by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/664
- some fixes for docker run by @zba in https://github.com/h2oai/h2ogpt/pull/659
- Minor grammatical changes by @anfrd in https://github.com/h2oai/h2ogpt/pull/663
- mac install readme updated - the tessaract command by @fazpu in https://github.com/h2oai/h2ogpt/pull/665
- the message copy button moved closer to top border and message padding increased to 16px by @fazpu in https://github.com/h2oai/h2ogpt/pull/666
- Azure OpenAI by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/667
- the copy button is placed at the bottom of each message by @fazpu in https://github.com/h2oai/h2ogpt/pull/668
- Fixes for conda installation issues by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/681
- Improve bitsandbytes usage and control in UI by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/682
- Add pandas as a direct dependency for vLLM by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/686
- GitHub Action Workflow to Publish Python Package by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/670
- Fixes [#678] and Fixes [#451] and Fixes [#434] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/690
- Adding install Git command to windows installation instructions by @ceriseghost in https://github.com/h2oai/h2ogpt/pull/698
- Fixes [#709] -- improve in-context learning control by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/720
- add build id as a docker tag (will make it easier to trace in CI history) by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/719
- Improve offline caching by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/715
- Make sure cache directory is consistent, and is pointing to /workspace/.cache by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/716
- Add performance benchmarks. by @arnocandel in https://github.com/h2oai/h2ogpt/pull/648
- Better control over prompting for document Q/A by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/721
- fix: Modify JavaScript code generation to be compatible with Gradio Blocks. by @mmalohlava in https://github.com/h2oai/h2ogpt/pull/696
- explicitly set additional cache directories to be under ~/.cache (or /workspace/.cache) by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/728
- change doc to run with local host user, so local host cache can be reused. by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/730
- More documentation updates by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/732
- Add vLLM in docker by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/714
- Fix llama2 by @arnocandel in https://github.com/h2oai/h2ogpt/pull/747
- Fix Llama2 7B fine-tuning by @arnocandel in https://github.com/h2oai/h2ogpt/pull/644
- Add ability to control quality-effort of ingestion/parsing and add support for json, jsonl, gzip by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/737
- Fix make_db.py from docker and document in readme by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/750
- [Docs] Change docker image name in vllm by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/753
- adding first draft of doctr integration by @ryanchesler in https://github.com/h2oai/h2ogpt/pull/752
- Rc/#762 fixes file upload hanging on UI by @ryanchesler in https://github.com/h2oai/h2ogpt/pull/765
- Added prompter entries for lmsys/vicuna-7b-v1.5, lmsys/vicuna-13-v1.5… by @patrickhwood in https://github.com/h2oai/h2ogpt/pull/756
- don't set envs, just keep the defaults from HOME env var by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/733
- [DOCS] Fix the link to offline README by @muendelezaji in https://github.com/h2oai/h2ogpt/pull/778
- Added the option to create OCRed documents that are layout aware by @ryanchesler in https://github.com/h2oai/h2ogpt/pull/779
- h2oGPT Helm Chart by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/770
- doc(macos): pin `llama-cpp-python` version to support GGML by @iam4x in https://github.com/h2oai/h2ogpt/pull/780
- Fixes [#703] Bugfix: Broken multilanguage output by @Mins0o in https://github.com/h2oai/h2ogpt/pull/790
- added pix2struct by @ryanchesler in https://github.com/h2oai/h2ogpt/pull/792
- Fixes [#508] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/805
- attach button added to the prompt form by @fazpu in https://github.com/h2oai/h2ogpt/pull/674
- DocTR handling of pdfs by @ryanchesler in https://github.com/h2oai/h2ogpt/pull/787
- Improve docker layer caching to reduce overall image size by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/803
- Add prompt type for Falcon-180B(-chat) by @arnocandel in https://github.com/h2oai/h2ogpt/pull/806
- [DevOps] Packer scripts for Azure, GCP & Jenkins pipeline by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/788
- Add softlink to preserve compatibility with old commands from docs and readme(s) by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/808
- consolidate install script in one place, speed up build, + fix caching for TGI and vLLM by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/813
- Rebuild duckdb with control over threads to avoid excessive threads per db when system has large core count by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/810
- enable_pdf_doctr in utils by @ffalkenberg in https://github.com/h2oai/h2ogpt/pull/819
- Allow choose model from UI and client via model_active_choice option when using model_lock by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/820
- Merge nochat API model_active_choice with visible_models by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/823
- landing screens components re-ordered on mobile screens by @fazpu in https://github.com/h2oai/h2ogpt/pull/827
- header styling changed on mobile screen by @fazpu in https://github.com/h2oai/h2ogpt/pull/829
- prompt area and upload button adjusted for mobile screens by @fazpu in https://github.com/h2oai/h2ogpt/pull/830
- visible models don't have the remove-all button by @fazpu in https://github.com/h2oai/h2ogpt/pull/831
- Build duckdb using manylinux by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/834
- Bump helm chart build to 85. by @tomkraljevic in https://github.com/h2oai/h2ogpt/pull/833
- Fix build tag by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/836
- Update to new chroma to fix DB corruption issues by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/837
- labels are brighter by @fazpu in https://github.com/h2oai/h2ogpt/pull/818
- app styling updated by @fazpu in https://github.com/h2oai/h2ogpt/pull/856
- Keyed access by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/850
- css cleanup - two unused css id definitions removed by @fazpu in https://github.com/h2oai/h2ogpt/pull/852
- dark theme - secondary button styling improved, label background color a bit lighter by @fazpu in https://github.com/h2oai/h2ogpt/pull/857
- helm chart improvements by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/825
- Simplify system_prompt, no more separate use_system_prompt, and ensure it is passed through to all models that take a system prompt, e.g. openai, replicate if supported, llama2, beluga, falcon180 by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/867
- Allow pre-appending chat conversation by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/869
- Chore: Add printing Makefile variables by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/872
- Fixes [#873] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/874
- Better prepare offline docs and code by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/877
- Add text_context_list to directly pass text lists to LLM to avoid db etc. steps if don't care about persisting state and just want LLM to use context as if uploaded docs by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/879
- Fix locking by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/883
- Account for prompt when counting tokens in prompt template by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/891
- Remove extra wget by @lamw in https://github.com/h2oai/h2ogpt/pull/892
- Move the `h2ogpt_key` param to the constructor of the `Client` by @this in https://github.com/h2oai/h2ogpt/pull/899
- Allow rw to /workspace data by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/902
- more cleanup to docker build scripts by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/903
- Web search and Agents by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/858
- configure update strategy by @lweren in https://github.com/h2oai/h2ogpt/pull/909
- Standardizes `--llamacpp_dict` usage in docs by @jamesbraza in https://github.com/h2oai/h2ogpt/pull/845
- [DevOps] Build wheel after modifying version in workflow by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/921
- fix volume mounts by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/919
- Fix handling of chat_conversation+system prompt when doing langchain by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/920
- Bump Helm Version by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/824
- Fix OpenAI summarization and use of text_context_list prompting and simplify code by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/924
- Speed-up sim search if only doing chunk_id filter. Speed-up other various tasks if large db. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/929
- Update docs for MACOS MPS by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/911
- Update Windows Installer files for October 2023 by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/930
- Add docker compose for vllm and when running on CPU mode by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/927
- add extra env variables by @lweren in https://github.com/h2oai/h2ogpt/pull/940
- Hk/main/benchmark plots by @hemenkapadia in https://github.com/h2oai/h2ogpt/pull/941
- Improve summarization and add extraction -- speed-up streaming by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/935
- External LLM Support - Helm by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/944
- Improve airgapped cache by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/952
- Add AWQ by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/954
- Bump helm chart version by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/957
- Update README_ui.md by @squidwardthetentacles in https://github.com/h2oai/h2ogpt/pull/925
- Improve timeout via max_time in UI/API by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/958
- [DOCS] Improve FAQ readability by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/959
- Ensure clone takes into account client inside endpoints. Persist client typically unless can't or don't request, since always using clone now. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/966
- [DOCS] Improve readability of README (second edit) by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/968
- [DOCS] Improve readability of INSTALL.md by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/971
- Add attention_sinks support for arbitrarily long generation by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/973
- Add gputil python package by @tomkraljevic in https://github.com/h2oai/h2ogpt/pull/982
- Fixed typo gpu_mem_track.py modelling_RW_falcon40b.py modelling_RW_falcon7b.py by @AniketP04 in https://github.com/h2oai/h2ogpt/pull/981
- Fixed typo timeout_iterator.py by @AniketP04 in https://github.com/h2oai/h2ogpt/pull/988
- Catch exception if not quite in job.future._exception and raise up to gradio for adding to chat exceptions in UI or raise direct if API. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/989
- Add prompt and test for https://huggingface.co/BAAI/AquilaChat2-34B-16K and related chat models by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/986
- fix the problem with image pull secrets by @lweren in https://github.com/h2oai/h2ogpt/pull/992
- relax max_new_tokens to be per prompt by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/998
- Avoid system OOM when too many pages for doctr by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/999
- Implement HYDE by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1004
- Use migration-safe `/submit_nochat_api` for the ChatCompletion API by @this in https://github.com/h2oai/h2ogpt/pull/1010
- Add `client.list_models()` method by @this in https://github.com/h2oai/h2ogpt/pull/1012
- Update get_limited_prompt and use tokenizer from llama.cpp directly by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1015
- Typo by @MSZ-MGS in https://github.com/h2oai/h2ogpt/pull/1016
- init container image override by @lweren in https://github.com/h2oai/h2ogpt/pull/1019
- Fix source file link by @us8945 in https://github.com/h2oai/h2ogpt/pull/1024
- For codellama or other JSON friendly models, stack system prompt with instructions and give document chunks in json, and ask for json output. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/978
- Refactor models API in the Client by @this in https://github.com/h2oai/h2ogpt/pull/1026
- Rename `models` param to `model` in the Client by @this in https://github.com/h2oai/h2ogpt/pull/1029
- Update client/README.md by @surenH2oai in https://github.com/h2oai/h2ogpt/pull/1031
- One click installer setup for MacOS by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/1033
- Add `client.server` API to the Client by @this in https://github.com/h2oai/h2ogpt/pull/1036
- [Client] Refactor classes in the completion APIs into a separate sub-module by @this in https://github.com/h2oai/h2ogpt/pull/1039
- Make Helm work with external and local LLM's by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/1034
- Add annotations to h2ogpt web svc by @ozahavi in https://github.com/h2oai/h2ogpt/pull/1044
- [Docs] Add downloading client from the GH release by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/1042
- [Client] Add streaming support for the text completion API by @this in https://github.com/h2oai/h2ogpt/pull/1046
- Various summarization/extraction fixes + easier llama.cpp control + redesign of Models UI by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1045
- Allow multiple llama, but llama.cpp is not thread safe, so only allowed if doing inference server for all but one. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1050
- Update links by @arnocandel in https://github.com/h2oai/h2ogpt/pull/1056
- Windows one-click Nov5 by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1055
- made changes singtel requested by @overaneout in https://github.com/h2oai/h2ogpt/pull/1038
- [DevOps] Cloud Image Fixes by @ChathurindaRanasinghe in https://github.com/h2oai/h2ogpt/pull/1057
- Add configs related to MPS for one click installer by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/1060
- Add Mac one click installer to README - NOV 08, 2023 by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/1064
- Fix prompting in langchain pandas csv agents, missing format_instructions and uses mrkl prompt even if make class on top, and no way to work around by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1058
- Youtube and local audio transcription by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1070
- Commands to give permissions for Mac one-click installers by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/1071
- Fix preload of ASR and allow embedding model to be on any GPU by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1074
- Parse files inside tar.gz by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/1073
- Reorganize UI a bit, and make it easier to upload url vs. text, autodetect by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1075
- Add deepseek coder prompt by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1083
- Update README_offline.md by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/1096
- [DOCS] Improve readability of README_ui.md by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/1103
- Streaming Speech-to-Text (STT) and Streaming Text-to-Speech (TTS) with Voice Cloning and Hands-Free Chat by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1089
- Update README.md (cosmetics) by @MSZ-MGS in https://github.com/h2oai/h2ogpt/pull/1104
- Fix Typo in FAQ by @daanknoope in https://github.com/h2oai/h2ogpt/pull/1112
- [DOCS] Client APIs README typo fixes and readability edit by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/1117
- [HELM] Remove default values from overrideConfig by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/1120
- [DOCS] Improve GPU readme by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/1141
- Upgrade to gradio4 by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1110
- For Issue [#1142], not a specific fix yet. Noticed documents that failed to parse were coming up as selectable documents. Fix that. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1150
- More for Issue [#1142] -- allow filter files and content by substrings and operations and/or by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1151
- Return prompt_raw so e.g. LLM and langchain prompting with docs can be seen by API by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1152
- Update gpt_langchain.py to support Youtube Shorts by @cherrerajobs in https://github.com/h2oai/h2ogpt/pull/1154
- web scrape by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1156
- Chunk streaming to help speed due to gradio UI slowness/bugs by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1162
- Use openai v1 for vllm by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1164
- [DOCS] Improve Linux readme by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/1149
- More streaming optimizations for good UX by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1171
- Gradio API call examples by @us8945 in https://github.com/h2oai/h2ogpt/pull/1174
- Make Claude and other non-system prompt models use chat history to mimic system prompt by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1177
- Minor doc improvements by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/1187
- Remove call to ngpus and openai/vllm client creation so faster when using by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1192
- Add video frame extraction, image chat, and image generation by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1181
- [HELM] Add option to run vLLM and h2oGPT on same pod by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/1194
- Gemini by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1208
- [DOCS] Minor doc fixes and improvements by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/1205
- Support docsgpt https://huggingface.co/Arc53/docsgpt-7b-mistral by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1215
- improve streaming and error logging by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1218
- docs: faq: document auth.json file format by @Blacksuan19 in https://github.com/h2oai/h2ogpt/pull/1206
- Use transformers version of attention sinks: https://github.com/huggingface/transformers/releases/tag/v4.36.0 by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1219
- Allow private model that fails to load to not revert tokenizer to None if passed tokenizer_base_model by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1223
- hide action selection if only one action is enabled by @Blacksuan19 in https://github.com/h2oai/h2ogpt/pull/1224
- add ability to set custom page title and favicon by @Blacksuan19 in https://github.com/h2oai/h2ogpt/pull/1225
- OpenAI Proxy Server redirects to Gradio Server by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1231
- Improve testing for OpenAI server and fix key issues with auth etc. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1234
- Handle errors better for OpenAI client by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1235
- Reachout by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1236
- Allow langchain for eval and add test -- Fixes [#1244] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1246
- Allow persistence for GradioClient for Issue [#1247] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1249
- Fixes [#1247] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1251
- Go back to checking system hash since stored in docker image now, even if takes 0.2s, worth it. Could delay checks to every minute or something, but more risky. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1253
- [HELM] Add vLLM check when running as stack by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/1255
- Control llava prompt by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1262
- Remove HYDE accordion outputs if present before giving history to LLM, and remove chat=True/False for prompt generation, hold-over and led to bugs in prompting for gradio->gradio by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1263
- Better exceptions docview by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1264
- Fixes to Helm Chart by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/1269
- use docker compose with a Dockerfile to force rebuild if new by @achraf-mer in https://github.com/h2oai/h2ogpt/pull/1273
- Windows update Jan 8, 2024 by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1272
- minor package upgrades by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1275
- MistralAI by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1290
- Enforce allow_upload_to_user_data and allow_upload_to_my_data -- Fixes [#1296] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1297
- Update README_MACOS.md by @antoninadert in https://github.com/h2oai/h2ogpt/pull/1292
- [DOCS] readme minor readability improvements by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/1299
- Ensure parameters for OpenAI->h2oGPT are transcribed. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1301
- Rotate image before OCR/DocTR - WIP by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1239
- Allow API call for conversion of text to audio by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1310
- Update README_MACOS.md by @antoninadert in https://github.com/h2oai/h2ogpt/pull/1308
- More API protection by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1314
- [HELM] Fix `PodLabels` by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/1318
- Update README_MACOS.md by @antoninadert in https://github.com/h2oai/h2ogpt/pull/1325
- exposing imagePullSecret and tag in values.yaml by @robinliubin in https://github.com/h2oai/h2ogpt/pull/1328
- Update QR code. by @arnocandel in https://github.com/h2oai/h2ogpt/pull/1336
- h2ogpt support namespaceOverride by @robinliubin in https://github.com/h2oai/h2ogpt/pull/1337
- Update docker for better vllm support, go to higher cuda for cuda kernels to exist by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1339
- Some package updates by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1344
- Add verifier -- only via API for now by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1267
- fix-33075_adding_shared_memory by @robinliubin in https://github.com/h2oai/h2ogpt/pull/1352
- Cu121 by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1368
- Add vision models as llms by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1369
- Upgrade to gradio4 3rd attempt by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1380
- Increase timeout when have failure to make sure we know the reason. by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1384
- Faster for llava by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1392
- Fixes [#1270] by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1396
- [DOCS] Minor doc improvements by @zainhaq-h2o in https://github.com/h2oai/h2ogpt/pull/1402
- Fixes [#1324] -- clear memory when browser tab closes by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1407
- Fix TEI use of HuggingFaceHubEmbeddings by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1424
- Fix login if chatbot counts differ from in auth file by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1429
- Improve auth/login for OpenAI API and fix AWQ by @pseudotensor in https://github.com/h2oai/h2ogpt/pull/1434
- GPT's user review functionality added by @Darshan-Malaviya in https://github.com/h2oai/h2ogpt/pull/1436
- [HELM] Add option to disable anti affinity by @EshamAaqib in https://github.com/h2oai/h2ogpt/pull/1423
- Update MacOS doc with information related to BFloat16 error by @Mathanraj-Sharma in https://github.com/h2oai/h2ogpt/pull/1442
## New Contributors
- @lo5 made their first contribution in https://github.com/h2oai/h2ogpt/pull/64
- @cpatrickalves made their first contribution in https://github.com/h2oai/h2ogpt/pull/63
- @jefffohl made their first contribution in https://github.com/h2oai/h2ogpt/pull/68
- @eltociear made their first contribution in https://github.com/h2oai/h2ogpt/pull/70
- @ChathurindaRanasinghe made their first contribution in https://github.com/h2oai/h2ogpt/pull/131
- @orellavie1212 made their first contribution in https://github.com/h2oai/h2ogpt/pull/132
- @hsm207 made their first contribution in https://github.com/h2oai/h2ogpt/pull/218
- @this made their first contribution in https://github.com/h2oai/h2ogpt/pull/242
- @fazpu made their first contribution in https://github.com/h2oai/h2ogpt/pull/256
- @3x0dv5 made their first contribution in https://github.com/h2oai/h2ogpt/pull/274
- @parkeraddison made their first contribution in https://github.com/h2oai/h2ogpt/pull/328
- @ernstvanderlinden made their first contribution in https://github.com/h2oai/h2ogpt/pull/377
- @0xraks made their first contribution in https://github.com/h2oai/h2ogpt/pull/375
- @cimadure made their first contribution in https://github.com/h2oai/h2ogpt/pull/387
- @wienke made their first contribution in https://github.com/h2oai/h2ogpt/pull/402
- @jllllll made their first contribution in https://github.com/h2oai/h2ogpt/pull/458
- @slycordinator made their first contribution in https://github.com/h2oai/h2ogpt/pull/487
- @jcatana made their first contribution in https://github.com/h2oai/h2ogpt/pull/547
- @mmalohlava made their first contribution in https://github.com/h2oai/h2ogpt/pull/543
- @zba made their first contribution in https://github.com/h2oai/h2ogpt/pull/659
- @anfrd made their first contribution in https://github.com/h2oai/h2ogpt/pull/663
- @ceriseghost made their first contribution in https://github.com/h2oai/h2ogpt/pull/698
- @ryanchesler made their first contribution in https://github.com/h2oai/h2ogpt/pull/752
- @patrickhwood made their first contribution in https://github.com/h2oai/h2ogpt/pull/756
- @muendelezaji made their first contribution in https://github.com/h2oai/h2ogpt/pull/778
- @iam4x made their first contribution in https://github.com/h2oai/h2ogpt/pull/780
- @Mins0o made their first contribution in https://github.com/h2oai/h2ogpt/pull/790
- @ffalkenberg made their first contribution in https://github.com/h2oai/h2ogpt/pull/819
- @tomkraljevic made their first contribution in https://github.com/h2oai/h2ogpt/pull/833
- @lamw made their first contribution in https://github.com/h2oai/h2ogpt/pull/892
- @lweren made their first contribution in https://github.com/h2oai/h2ogpt/pull/909
- @jamesbraza made their first contribution in https://github.com/h2oai/h2ogpt/pull/845
- @hemenkapadia made their first contribution in https://github.com/h2oai/h2ogpt/pull/941
- @squidwardthetentacles made their first contribution in https://github.com/h2oai/h2ogpt/pull/925
- @AniketP04 made their first contribution in https://github.com/h2oai/h2ogpt/pull/981
- @MSZ-MGS made their first contribution in https://github.com/h2oai/h2ogpt/pull/1016
- @us8945 made their first contribution in https://github.com/h2oai/h2ogpt/pull/1024
- @surenH2oai made their first contribution in https://github.com/h2oai/h2ogpt/pull/1031
- @ozahavi made their first contribution in https://github.com/h2oai/h2ogpt/pull/1044
- @overaneout made their first contribution in https://github.com/h2oai/h2ogpt/pull/1038
- @daanknoope made their first contribution in https://github.com/h2oai/h2ogpt/pull/1112
- @cherrerajobs made their first contribution in https://github.com/h2oai/h2ogpt/pull/1154
- @Blacksuan19 made their first contribution in https://github.com/h2oai/h2ogpt/pull/1206
- @antoninadert made their first contribution in https://github.com/h2oai/h2ogpt/pull/1292
- @Darshan-Malaviya made their first contribution in https://github.com/h2oai/h2ogpt/pull/1436
Full Changelog: https://github.com/h2oai/h2ogpt/commits/0.2.0