
Ubuntu – HuggingFacePipeline and LangChain

This is my current code:

    from langchain.llms import HuggingFacePipeline
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, BitsAndBytesConfig
    from langchain import PromptTemplate, LLMChain
    import torch

    model_id = "../models/openbuddy-llama2-34b-v11.1-bf16"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    nf4_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_quant_type='nf4',
        bnb_4bit_use_double_quant=False,
        max_memory=24000
    )
    model =…
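One detail worth flagging in the excerpt above: in recent versions of transformers, `max_memory` is not a `BitsAndBytesConfig` parameter. It is an argument to `from_pretrained`, and it expects a mapping from device to a memory budget string rather than a bare integer. A minimal sketch of how the two sets of options are usually separated (the memory budgets below are illustrative, not taken from the question):

```python
# Hedged sketch, assuming a recent transformers API. Quantization settings
# belong in BitsAndBytesConfig; max_memory belongs to from_pretrained.
# Plain dicts are used here so the split is visible without loading a model.

quantization_kwargs = {
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": False,
    # bnb_4bit_compute_dtype would be torch.bfloat16 in the real call
}

from_pretrained_kwargs = {
    "device_map": "auto",
    # NOT max_memory=24000: a mapping from device index (or "cpu")
    # to a size string such as "24GiB" (illustrative budget).
    "max_memory": {0: "24GiB"},
}

# The real call would then look roughly like:
# model = AutoModelForCausalLM.from_pretrained(
#     model_id,
#     quantization_config=BitsAndBytesConfig(**quantization_kwargs),
#     **from_pretrained_kwargs,
# )
```

Passing `max_memory` into `BitsAndBytesConfig`, as the excerpt does, is either silently ignored or rejected depending on the transformers version, so moving it to `from_pretrained` is the usual fix.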


How to use AWS SageMaker with a newer version of the Hugging Face Estimator? – Docker

When trying to use the Hugging Face estimator on SageMaker (following "Run training on Amazon SageMaker"), e.g.:

    # create the Estimator
    huggingface_estimator = HuggingFace(
        entry_point='train.py',
        source_dir='./scripts',
        instance_type='ml.p3.2xlarge',
        instance_count=1,
        role=role,
        transformers_version='4.17',
        pytorch_version='1.10',
        py_version='py38',
        hyperparameters=hyperparameters
    )

When I tried to increase the version…
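For context on the version question: the `transformers_version` / `pytorch_version` / `py_version` triple must match a published Hugging Face Deep Learning Container image; the sagemaker SDK rejects arbitrary combinations. A hedged sketch of the kwargs for a newer combination (4.26 / 1.13 / py39 is used as an illustrative triple here; the SDK's supported-images list is the authority on what actually exists):

```python
# Illustrative estimator kwargs. The three version fields must together
# correspond to a released Hugging Face DLC, not be bumped independently.
estimator_kwargs = {
    "entry_point": "train.py",
    "source_dir": "./scripts",
    "instance_type": "ml.p3.2xlarge",
    "instance_count": 1,
    "transformers_version": "4.26",  # assumed-available triple, verify
    "pytorch_version": "1.13",       # against the sagemaker SDK's
    "py_version": "py39",            # published image list
}

# In real code this would be unpacked into the estimator:
# huggingface_estimator = HuggingFace(role=role, **estimator_kwargs)
```

When a version bump fails, the error usually means no container image exists for that exact triple, so the fix is to pick the nearest triple from the published list rather than the newest individual versions.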
