I am trying to use testscript.py to check whether my DLC environment works, but it shows some errors and gets stuck at "Starting training…".
Here is what I did:
- A Windows 11 computer with an i7-12700 and an RTX 3060
- Install Anaconda
- Install CUDA 11.2 and cuDNN 8.1
- Create the environment from the official DEEPLABCUT.yaml file
- Enter the environment and pip install torch
- Check that TF can use the GPU
- Run testscript.py, which gets stuck at "Starting training…"
Please help me solve this problem. I think it may be caused by some outdated packages.
The packages in the environment:
# packages in environment at C:\App\anaconda3\envs\DEEPLABCUT:
#
# Name Version Build Channel
absl-py 1.2.0 pypi_0 pypi
aom 3.4.0 h0e60522_1 conda-forge
argon2-cffi 21.3.0 pyhd8ed1ab_0 conda-forge
argon2-cffi-bindings 21.2.0 py38h294d835_2 conda-forge
asttokens 2.0.8 pyhd8ed1ab_0 conda-forge
astunparse 1.6.3 pypi_0 pypi
attrs 22.1.0 pyh71513ae_1 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 py_2 conda-forge
backports.functools_lru_cache 1.6.4 pyhd8ed1ab_0 conda-forge
beautifulsoup4 4.11.1 pyha770c72_0 conda-forge
bleach 5.0.1 pyhd8ed1ab_0 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
ca-certificates 2022.6.15.1 h5b45459_0 conda-forge
cachetools 5.2.0 pypi_0 pypi
certifi 2022.6.15.1 pypi_0 pypi
cffi 1.15.1 py38hd8c33c5_0 conda-forge
charset-normalizer 2.1.1 pypi_0 pypi
colorama 0.4.5 pyhd8ed1ab_0 conda-forge
cycler 0.11.0 pypi_0 pypi
debugpy 1.6.3 py38h885f38d_0 conda-forge
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
deeplabcut 2.2.2 pypi_0 pypi
defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
executing 1.0.0 pyhd8ed1ab_0 conda-forge
expat 2.4.8 h39d44d4_0 conda-forge
ffmpeg 5.1.1 gpl_h7b28927_101 conda-forge
filterpy 1.4.5 pypi_0 pypi
flatbuffers 2.0.7 pypi_0 pypi
flit-core 3.7.1 pyhd8ed1ab_0 conda-forge
font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge
font-ttf-inconsolata 3.000 h77eed37_0 conda-forge
font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge
font-ttf-ubuntu 0.83 hab24e00_0 conda-forge
fontconfig 2.14.0 hce3cb01_0 conda-forge
fonts-conda-ecosystem 1 0 conda-forge
fonts-conda-forge 1 0 conda-forge
fonttools 4.37.1 pypi_0 pypi
freetype 2.12.1 h546665d_0 conda-forge
gast 0.4.0 pypi_0 pypi
gettext 0.19.8.1 ha2e2712_1008 conda-forge
glib 2.72.1 h7755175_0 conda-forge
glib-tools 2.72.1 h7755175_0 conda-forge
google-auth 2.11.0 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.48.1 pypi_0 pypi
gst-plugins-base 1.20.3 h001b923_1 conda-forge
gstreamer 1.20.3 h6b5321d_1 conda-forge
h5py 3.7.0 pypi_0 pypi
icu 70.1 h0e60522_0 conda-forge
idna 3.3 pypi_0 pypi
imageio 2.21.2 pypi_0 pypi
imgaug 0.4.0 pypi_0 pypi
importlib-metadata 4.11.4 py38haa244fe_0 conda-forge
importlib_resources 5.9.0 pyhd8ed1ab_0 conda-forge
ipykernel 6.15.2 pyh025b116_0 conda-forge
ipython 8.5.0 pyh08f2357_1 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
ipywidgets 8.0.2 pyhd8ed1ab_1 conda-forge
jedi 0.18.1 pyhd8ed1ab_2 conda-forge
jinja2 3.1.2 pyhd8ed1ab_1 conda-forge
joblib 1.1.0 pypi_0 pypi
jpeg 9e h8ffe710_2 conda-forge
jsonschema 4.16.0 pyhd8ed1ab_0 conda-forge
jupyter 1.0.0 py38haa244fe_7 conda-forge
jupyter_client 7.3.5 pyhd8ed1ab_0 conda-forge
jupyter_console 6.4.4 pyhd8ed1ab_0 conda-forge
jupyter_core 4.11.1 py38haa244fe_0 conda-forge
jupyterlab_pygments 0.2.2 pyhd8ed1ab_0 conda-forge
jupyterlab_widgets 3.0.3 pyhd8ed1ab_0 conda-forge
keras 2.10.0 pypi_0 pypi
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.4.4 pypi_0 pypi
krb5 1.19.3 h1176d77_0 conda-forge
libclang 14.0.6 pypi_0 pypi
libclang13 14.0.6 default_h77d9078_0 conda-forge
libffi 3.4.2 h8ffe710_5 conda-forge
libglib 2.72.1 h3be07f2_0 conda-forge
libiconv 1.16 he774522_0 conda-forge
libogg 1.3.4 h8ffe710_1 conda-forge
libpng 1.6.37 h1d00b33_4 conda-forge
libsodium 1.0.18 h8d14728_1 conda-forge
libsqlite 3.39.3 hcfcfb64_0 conda-forge
libvorbis 1.3.7 h0e60522_0 conda-forge
libxml2 2.9.14 hf5bbc77_4 conda-forge
libxslt 1.1.35 h34f844d_0 conda-forge
libzlib 1.2.12 h8ffe710_2 conda-forge
llvmlite 0.39.1 pypi_0 pypi
lxml 4.9.1 py38h294d835_0 conda-forge
markdown 3.4.1 pypi_0 pypi
markupsafe 2.1.1 py38h294d835_1 conda-forge
matplotlib 3.5.3 pypi_0 pypi
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
mistune 2.0.4 pyhd8ed1ab_0 conda-forge
msgpack 1.0.4 pypi_0 pypi
msgpack-numpy 0.4.8 pypi_0 pypi
nb_conda 2.2.1 win_6 conda-forge
nb_conda_kernels 2.3.1 py38haa244fe_1 conda-forge
nbclient 0.6.8 pyhd8ed1ab_0 conda-forge
nbconvert 7.0.0 pyhd8ed1ab_0 conda-forge
nbconvert-core 7.0.0 pyhd8ed1ab_0 conda-forge
nbconvert-pandoc 7.0.0 pyhd8ed1ab_0 conda-forge
nbformat 5.4.0 pyhd8ed1ab_0 conda-forge
nest-asyncio 1.5.5 pyhd8ed1ab_0 conda-forge
networkx 2.8.6 pypi_0 pypi
notebook 6.4.12 pyha770c72_0 conda-forge
numba 0.56.2 pypi_0 pypi
numexpr 2.8.3 pypi_0 pypi
numpy 1.23.3 pypi_0 pypi
oauthlib 3.2.1 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
openh264 2.3.0 h0e60522_0 conda-forge
openssl 1.1.1q h8ffe710_0 conda-forge
opt-einsum 3.3.0 pypi_0 pypi
packaging 21.3 pyhd8ed1ab_0 conda-forge
pandas 1.4.4 pypi_0 pypi
pandoc 2.19.2 h57928b3_0 conda-forge
pandocfilters 1.5.0 pyhd8ed1ab_0 conda-forge
parso 0.8.3 pyhd8ed1ab_0 conda-forge
patsy 0.5.2 pypi_0 pypi
pcre 8.45 h0e60522_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 9.2.0 pypi_0 pypi
pip 22.2.2 pyhd8ed1ab_0 conda-forge
pkgutil-resolve-name 1.3.10 pyhd8ed1ab_0 conda-forge
ply 3.11 py_1 conda-forge
prometheus_client 0.14.1 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.31 pyha770c72_0 conda-forge
prompt_toolkit 3.0.31 hd8ed1ab_0 conda-forge
protobuf 3.19.4 pypi_0 pypi
psutil 5.9.2 py38h91455d4_0 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pygments 2.13.0 pyhd8ed1ab_0 conda-forge
pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge
pyqt 5.15.7 py38h75e37d8_0 conda-forge
pyqt5-sip 12.11.0 py38h885f38d_0 conda-forge
pyrsistent 0.18.1 py38h294d835_1 conda-forge
python 3.8.13 h9a09f29_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-fastjsonschema 2.16.1 pyhd8ed1ab_0 conda-forge
python_abi 3.8 2_cp38 conda-forge
pytz 2022.2.1 pypi_0 pypi
pywavelets 1.3.0 pypi_0 pypi
pywin32 303 py38h294d835_0 conda-forge
pywinpty 2.0.7 py38hd3f51b4_0 conda-forge
pyyaml 6.0 pypi_0 pypi
pyzmq 23.2.1 py38h09162b1_0 conda-forge
qt-main 5.15.6 hf0cf448_0 conda-forge
qtconsole 5.3.2 pyhd8ed1ab_0 conda-forge
qtconsole-base 5.3.2 pyha770c72_0 conda-forge
qtpy 2.2.0 pyhd8ed1ab_0 conda-forge
requests 2.28.1 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
ruamel-yaml 0.17.21 pypi_0 pypi
ruamel-yaml-clib 0.2.6 pypi_0 pypi
scikit-image 0.19.3 pypi_0 pypi
scikit-learn 1.1.2 pypi_0 pypi
scipy 1.9.1 pypi_0 pypi
send2trash 1.8.0 pyhd8ed1ab_0 conda-forge
setuptools 59.8.0 pypi_0 pypi
shapely 1.8.4 pypi_0 pypi
sip 6.6.2 py38h885f38d_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
soupsieve 2.3.2.post1 pyhd8ed1ab_0 conda-forge
sqlite 3.39.3 hcfcfb64_0 conda-forge
stack_data 0.5.0 pyhd8ed1ab_0 conda-forge
statsmodels 0.13.2 pypi_0 pypi
svt-av1 1.2.1 h0e60522_0 conda-forge
tables 3.7.0 pypi_0 pypi
tabulate 0.8.10 pypi_0 pypi
tensorboard 2.10.0 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tensorflow 2.10.0 pypi_0 pypi
tensorflow-estimator 2.10.0 pypi_0 pypi
tensorflow-io-gcs-filesystem 0.27.0 pypi_0 pypi
tensorpack 0.11 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
terminado 0.15.0 py38haa244fe_0 conda-forge
tf-slim 1.1.0 pypi_0 pypi
threadpoolctl 3.1.0 pypi_0 pypi
tifffile 2022.8.12 pypi_0 pypi
tinycss2 1.1.1 pyhd8ed1ab_0 conda-forge
tk 8.6.12 h8ffe710_0 conda-forge
toml 0.10.2 pyhd8ed1ab_0 conda-forge
torch 1.12.1 pypi_0 pypi
tornado 6.2 py38h294d835_0 conda-forge
tqdm 4.64.1 pypi_0 pypi
traitlets 5.3.0 pyhd8ed1ab_0 conda-forge
typing_extensions 4.3.0 pyha770c72_0 conda-forge
ucrt 10.0.20348.0 h57928b3_0 conda-forge
urllib3 1.26.12 pypi_0 pypi
vc 14.2 hb210afc_7 conda-forge
vs2015_runtime 14.29.30139 h890b9b1_7 conda-forge
wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge
webencodings 0.5.1 py_1 conda-forge
werkzeug 2.2.2 pypi_0 pypi
wheel 0.37.1 pyhd8ed1ab_0 conda-forge
widgetsnbextension 4.0.3 pyhd8ed1ab_0 conda-forge
winpty 0.4.3 4 conda-forge
wrapt 1.14.1 pypi_0 pypi
wxpython 4.0.7.post2 pypi_0 pypi
x264 1!164.3095 h8ffe710_2 conda-forge
x265 3.5 h2d74725_3 conda-forge
xz 5.2.6 h8d14728_0 conda-forge
zeromq 4.3.4 h0e60522_1 conda-forge
zipp 3.8.1 pyhd8ed1ab_0 conda-forge
zstd 1.5.2 h7755175_4 conda-forge
I confirmed that TF can use my GPU:
(DEEPLABCUT) C:\Windows\system32>python
Python 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 05:59:45) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from tensorflow.python.client import device_lib
>>> print(device_lib.list_local_devices())
2022-09-11 20:41:50.137638: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-09-11 20:41:50.481875: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /device:GPU:0 with 9616 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 200595950863773239
xla_global_id: -1
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 10083106816
locality {
bus_id: 1
links {
}
}
incarnation: 14570387183940456862
physical_device_desc: "device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6"
xla_global_id: 416903419
]
The terminal gets stuck at "Starting training…":
(DEEPLABCUT) C:\works\DLC\DLC_script\DeepLabCut-master\DeepLabCut-master\examples>python testscript.py
Loading DLC 2.2.2...
Imported DLC!
On Windows/OSX tensorpack is not tested by default.
CREATING PROJECT
Created "C:worksDLCDLC_scriptDeepLabCut-masterDeepLabCut-masterexamplesTEST-Alex-2022-09-11videos"
Created "C:worksDLCDLC_scriptDeepLabCut-masterDeepLabCut-masterexamplesTEST-Alex-2022-09-11labeled-data"
Created "C:worksDLCDLC_scriptDeepLabCut-masterDeepLabCut-masterexamplesTEST-Alex-2022-09-11training-datasets"
Created "C:worksDLCDLC_scriptDeepLabCut-masterDeepLabCut-masterexamplesTEST-Alex-2022-09-11dlc-models"
Copying the videos
C:worksDLCDLC_scriptDeepLabCut-masterDeepLabCut-masterexamplesTEST-Alex-2022-09-11videosreachingvideo1.avi
Generated "C:worksDLCDLC_scriptDeepLabCut-masterDeepLabCut-masterexamplesTEST-Alex-2022-09-11config.yaml"
A new project with name TEST-Alex-2022-09-11 is created at C:worksDLCDLC_scriptDeepLabCut-masterDeepLabCut-masterexamples and a configurable file (config.yaml) is stored there. Change the parameters in this file to adapt to your project's needs.
Once you have changed the configuration file, use the function 'extract_frames' to select frames for labeling.
. [OPTIONAL] Use the function 'add_new_videos' to add new videos to your project (at any stage).
EXTRACTING FRAMES
Config file read successfully.
Extracting frames based on kmeans ...
Kmeans-quantization based extracting of frames from 0.0 seconds to 8.53 seconds.
Extracting and downsampling... 256 frames from the video.
256it [00:01, 214.77it/s]
Kmeans clustering ... (this might take a while)
Frames were successfully extracted, for the videos listed in the config.yaml file.
You can now label the frames using the function 'label_frames' (Note, you should label frames extracted from diverse videos (and many videos; we do not recommend training on single videos!)).
CREATING-SOME LABELS FOR THE FRAMES
Plot labels...
Creating images with labels by Alex.
100%|█████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 6.01it/s]
If all the labels are ok, then use the function 'create_training_dataset' to create the training dataset!
CREATING TRAININGSET
Downloading a ImageNet-pretrained model from https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/ckptsaug/efficientnet-b0.tar.gz....
The training dataset is successfully created. Use the function 'train_network' to start training. Happy training!
CHANGING training parameters to end quickly!
TRAIN
Selecting single-animal trainer
Config:
{'all_joints': [[0], [1], [2], [3]],
'all_joints_names': ['bodypart1', 'bodypart2', 'bodypart3', 'objectA'],
'alpha_r': 0.02,
'apply_prob': 0.5,
'batch_size': 1,
'contrast': {'clahe': True,
'claheratio': 0.1,
'histeq': True,
'histeqratio': 0.1},
'convolution': {'edge': False,
'emboss': {'alpha': [0.0, 1.0], 'strength': [0.5, 1.5]},
'embossratio': 0.1,
'sharpen': False,
'sharpenratio': 0.3},
'crop_pad': 0,
'cropratio': 0.4,
'dataset': 'training-datasets\iteration-0\UnaugmentedDataSet_TESTSep11\TEST_Alex80shuffle1.mat',
'dataset_type': 'default',
'decay_steps': 30000,
'deterministic': False,
'display_iters': 2,
'fg_fraction': 0.25,
'global_scale': 0.8,
'init_weights': 'C:\App\anaconda3\envs\DEEPLABCUT\lib\site-packages\deeplabcut\pose_estimation_tensorflow\models\pretrained\efficientnet-b0\model.ckpt',
'intermediate_supervision': False,
'intermediate_supervision_layer': 12,
'location_refinement': True,
'locref_huber_loss': True,
'locref_loss_weight': 0.05,
'locref_stdev': 7.2801,
'log_dir': 'log',
'lr_init': 0.0005,
'max_input_size': 1500,
'mean_pixel': [123.68, 116.779, 103.939],
'metadataset': 'training-datasets\iteration-0\UnaugmentedDataSet_TESTSep11\Documentation_data-TEST_80shuffle1.pickle',
'min_input_size': 64,
'mirror': False,
'multi_stage': False,
'multi_step': [[0.001, 5]],
'net_type': 'efficientnet-b0',
'num_joints': 4,
'optimizer': 'sgd',
'pairwise_huber_loss': False,
'pairwise_predict': False,
'partaffinityfield_predict': False,
'pos_dist_thresh': 17,
'project_path': 'C:\works\DLC\DLC_script\DeepLabCut-master\DeepLabCut-master\examples\TEST-Alex-2022-09-11',
'regularize': False,
'rotation': 25,
'rotratio': 0.4,
'save_iters': 5,
'scale_jitter_lo': 0.5,
'scale_jitter_up': 1.25,
'scoremap_dir': 'test',
'shuffle': True,
'snapshot_prefix': 'C:\works\DLC\DLC_script\DeepLabCut-master\DeepLabCut-master\examples\TEST-Alex-2022-09-11\dlc-models\iteration-0\TESTSep11-trainset80shuffle1\train\snapshot',
'stride': 8.0,
'weigh_negatives': False,
'weigh_only_present_joints': False,
'weigh_part_predictions': False,
'weight_decay': 0.0001}
Batch Size is 1
C:\App\anaconda3\envs\DEEPLABCUT\lib\site-packages\tensorflow\python\keras\engine\base_layer_v1.py:1694: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
warnings.warn('`layer.apply` is deprecated and '
2022-09-11 20:18:35.578698: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-09-11 20:18:35.897924: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2022-09-11 20:18:35.898044: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9616 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6
Loading ImageNet-pretrained efficientnet-b0
2022-09-11 20:18:36.209153: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9616 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6
Switching to cosine decay schedule with adam!
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:Appanaconda3envsDEEPLABCUTlibthreading.py", line 932, in _bootstrap_inner
self.run()
File "C:Appanaconda3envsDEEPLABCUTlibthreading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:Appanaconda3envsDEEPLABCUTlibsite-packagesdeeplabcutpose_estimation_tensorflowcoretrain.py", line 81, in load_and_enqueue
batch_np = dataset.next_batch()
File "C:Appanaconda3envsDEEPLABCUTlibsite-packagesdeeplabcutpose_estimation_tensorflowdatasetspose_imgaug.py", line 404, in next_batch
scmap_update = self.get_scmap_update(
File "C:Appanaconda3envsDEEPLABCUTlibsite-packagesdeeplabcutpose_estimation_tensorflowdatasetspose_imgaug.py", line 361, in get_scmap_update
) = self.compute_target_part_scoremap_numpy(
File "C:Appanaconda3envsDEEPLABCUTlibsite-packagesdeeplabcutpose_estimation_tensorflowdatasetspose_imgaug.py", line 498, in compute_target_part_scoremap_numpy
j_x = np.asscalar(joint_pt[0])
File "C:Appanaconda3envsDEEPLABCUTlibsite-packagesnumpy__init__.py", line 311, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'asscalar'
2022-09-11 20:18:38.298353: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:354] MLIR V1 optimization pass is not enabled
Training parameter:
{'stride': 8.0, 'weigh_part_predictions': False, 'weigh_negatives': False, 'fg_fraction': 0.25, 'mean_pixel': [123.68, 116.779, 103.939], 'shuffle': True, 'snapshot_prefix': 'C:\works\DLC\DLC_script\DeepLabCut-master\DeepLabCut-master\examples\TEST-Alex-2022-09-11\dlc-models\iteration-0\TESTSep11-trainset80shuffle1\train\snapshot', 'log_dir': 'log', 'global_scale': 0.8, 'location_refinement': True, 'locref_stdev': 7.2801, 'locref_loss_weight': 0.05, 'locref_huber_loss': True, 'optimizer': 'adam', 'intermediate_supervision': False, 'intermediate_supervision_layer': 12, 'regularize': False, 'weight_decay': 0.0001, 'crop_pad': 0, 'scoremap_dir': 'test', 'batch_size': 1, 'dataset_type': 'default', 'deterministic': False, 'mirror': False, 'pairwise_huber_loss': False, 'weigh_only_present_joints': False, 'partaffinityfield_predict': False, 'pairwise_predict': False, 'all_joints': [[0], [1], [2], [3]], 'all_joints_names': ['bodypart1', 'bodypart2', 'bodypart3', 'objectA'], 'alpha_r': 0.02, 'apply_prob': 0.5, 'contrast': {'clahe': True, 'claheratio': 0.1, 'histeq': True, 'histeqratio': 0.1, 'gamma': False, 'sigmoid': False, 'log': False, 'linear': False}, 'convolution': {'edge': False, 'emboss': {'alpha': [0.0, 1.0], 'strength': [0.5, 1.5]}, 'embossratio': 0.1, 'sharpen': False, 'sharpenratio': 0.3}, 'cropratio': 0.4, 'dataset': 'training-datasets\iteration-0\UnaugmentedDataSet_TESTSep11\TEST_Alex80shuffle1.mat', 'decay_steps': 30000, 'display_iters': 2, 'init_weights': 'C:\App\anaconda3\envs\DEEPLABCUT\lib\site-packages\deeplabcut\pose_estimation_tensorflow\models\pretrained\efficientnet-b0\model.ckpt', 'lr_init': 0.0005, 'max_input_size': 1500, 'metadataset': 'training-datasets\iteration-0\UnaugmentedDataSet_TESTSep11\Documentation_data-TEST_80shuffle1.pickle', 'min_input_size': 64, 'multi_stage': False, 'multi_step': [[0.001, 5]], 'net_type': 'efficientnet-b0', 'num_joints': 4, 'pos_dist_thresh': 17, 'project_path': 'C:\works\DLC\DLC_script\DeepLabCut-master\DeepLabCut-master\examples\TEST-Alex-2022-09-11', 'rotation': 25, 'rotratio': 0.4, 'save_iters': 5, 'scale_jitter_lo': 0.5, 'scale_jitter_up': 1.25, 'covering': True, 'elastic_transform': True, 'motion_blur': True, 'motion_blur_params': {'k': 7, 'angle': (-90, 90)}, 'use_batch_norm': False, 'use_drop_out': False}
Starting training....
2 Answers
I suspect two issues here: `np.asscalar` isn't found. Your numpy version is `1.23.3`, but `np.asscalar` has been deprecated since `1.16`. Maybe try downgrading (`pip install numpy==1.15` / `conda install numpy==1.15`) and see if the error persists.

Edit: I just checked the config file supplied by DLC and verified that no numpy version is specified. You should probably downgrade to a version < `1.16`, since `np.asscalar` is used.
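To quickly confirm which NumPy the environment actually resolves and whether `asscalar` is still available, here is a small diagnostic sketch (not part of DLC or testscript.py):

```python
import numpy as np

# asscalar was removed in NumPy 1.23, so it is absent on the version listed above.
print(np.__version__)            # 1.23.3 in the environment shown here
print(hasattr(np, "asscalar"))   # False on NumPy >= 1.23, True on <= 1.22
```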
(Update)

This is now fixed in the repository (PR #1982). If working from a `git clone` version of the DeepLabCut repository, then run a `git pull` to grab the latest version. Otherwise, they'll probably drop a release soon with the fix integrated.
(Original Answer)

The `numpy.asscalar()` method was finally removed in NumPy 1.23 (see the Release Notes) after being deprecated since v1.16. I added an Issue to the repository. Unless you want to send in a Pull Request to fix it, downgrade NumPy to 1.22 or below.
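For reference, the NumPy documentation points to `item()` as the replacement for `asscalar`, so the fix amounts to a one-line substitution. A minimal sketch, reusing the `joint_pt` name from the traceback above purely for illustration:

```python
import numpy as np

joint_pt = np.array([12.5, 7.0])  # stand-in for the array used in pose_imgaug.py

# Removed in NumPy >= 1.23; raises the AttributeError shown in the traceback:
# j_x = np.asscalar(joint_pt[0])

# Equivalent call that works on both old and new NumPy:
j_x = joint_pt[0].item()
print(j_x)  # 12.5
```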
Consider using Mamba

BTW, no one should be waiting for slow solves anymore – Mamba has been stable for a long time and solved this issue. Once installed, just use the word `mamba` instead of `conda` for most commands.

Edit the YAML
Alternatively, edit the YAML to include an upper bound on `numpy` (see the sketch below) and recreate the environment from the updated YAML.
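A minimal sketch of what that pin could look like, assuming `numpy` is listed under the pip dependencies of DEEPLABCUT.yaml; the layout of the official file may differ:

```yaml
# Hypothetical excerpt of DEEPLABCUT.yaml - only the numpy pin is the point here.
name: DEEPLABCUT
dependencies:
  - python=3.8
  - pip
  - pip:
      - deeplabcut
      - "numpy<1.23"   # upper bound: np.asscalar was removed in NumPy 1.23
```

Then recreate the environment with `conda env create -f DEEPLABCUT.yaml` (or `mamba env create -f DEEPLABCUT.yaml`).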