【AI Painting】Best Character Model Training! Hand-Holding LoRA Model Training Tutorial, One-Click Package Released

Code:
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used
'vision_model.encoder.layers.14.self_attn.out_proj.weight', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.post_layernorm.weight', 'vision_model.encoder.layers.8.self_attn.v_proj.weight', 'vision_model.encoder.layers.1.self_attn.v_proj.bias']
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
loading text encoder: <All keys matched successfully>
Replace CrossAttention.forward to use xformers
Traceback (most recent call last):
?File "C:\Users\Bernheim\Desktop\lora訓(xùn)練\lora-scripts\sd-scripts\library\train_util.py", line 1308, in replace_unet_cross_attn_to_xformers
??import xformers.ops
ModuleNotFoundError: No module named 'xformers'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
?File "C:\Users\Bernheim\Desktop\lora訓(xùn)練\lora-scripts\sd-scripts\train_network.py", line 548, in <module>
??train(args)
?File "C:\Users\Bernheim\Desktop\lora訓(xùn)練\lora-scripts\sd-scripts\train_network.py", line 159, in train
??train_util.replace_unet_modules(unet, args.mem_eff_attn, args.xformers)
?File "C:\Users\Bernheim\Desktop\lora訓(xùn)練\lora-scripts\sd-scripts\library\train_util.py", line 1262, in replace_unet_modules
??replace_unet_cross_attn_to_xformers()
?File "C:\Users\Bernheim\Desktop\lora訓(xùn)練\lora-scripts\sd-scripts\library\train_util.py", line 1310, in replace_unet_cross_attn_to_xformers
??raise ImportError("No xformers / xformersがインストールされていないようです")
ImportError: No xformers / xformersがインストールされていないようです
Traceback (most recent call last):
?File "C:\Users\Bernheim\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
??return _run_code(code, main_globals, None,
?File "C:\Users\Bernheim\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
??exec(code, run_globals)
?File "C:\Users\Bernheim\Desktop\lora訓(xùn)練\lora-scripts\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
?File "C:\Users\Bernheim\Desktop\lora訓(xùn)練\lora-scripts\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
??args.func(args)
?File "C:\Users\Bernheim\Desktop\lora訓(xùn)練\lora-scripts\venv\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
??simple_launcher(args)
?File "C:\Users\Bernheim\Desktop\lora訓(xùn)練\lora-scripts\venv\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
??raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\\Users\\Bernheim\\Desktop\\lora訓(xùn)練\\lora-scripts\\venv\\Scripts\\python.exe', './sd-scripts/train_network.py', '--enable_bucket', '--pretrained_model_name_or_path=./sd-models/model.ckpt', '--train_data_dir=./train/alhatham', '--output_dir=./output', '--logging_dir=./logs', '--resolution=384,384', '--network_module=networks.lora', '--max_train_epochs=10', '--learning_rate=1e-4', '--unet_lr=1e-4', '--text_encoder_lr=1e-5', '--lr_scheduler=cosine_with_restarts', '--lr_warmup_steps=0', '--network_dim=32', '--network_alpha=32', '--output_name=alhatham', '--train_batch_size=2', '--save_every_n_epochs=2', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=1337', '--cache_latents', '--clip_skip=2', '--prior_loss_weight=1', '--max_token_length=225', '--caption_extension=.txt', '--save_model_as=ckpt', '--min_bucket_reso=256', '--max_bucket_reso=1024', '--xformers', '--shuffle_caption', '--use_8bit_adam', '--network_train_unet_only']' returned non-zero exit status 1.
Train finished
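
Diagnosis: training never actually started. The first traceback shows that the xformers package is missing from the lora-scripts venv (ModuleNotFoundError: No module named 'xformers'), so train_network.py aborts as soon as the --xformers option is applied, and accelerate then reports the non-zero exit status. The final "Train finished" line only means the launcher script exited, not that a model was saved.

A minimal fix sketch, assuming the venv at the path shown in the log and that a prebuilt xformers wheel matching your installed torch/CUDA build is available on PyPI (if not, you would need to pin a version compatible with your torch install):

Code:
cd C:\Users\Bernheim\Desktop\lora訓(xùn)練\lora-scripts
venv\Scripts\activate
python -m pip install xformers

If no compatible wheel installs cleanly, an alternative is to remove the --xformers option from the training settings; training then falls back to the default attention implementation, at the cost of higher VRAM use and somewhat slower steps.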


