
[Help] How to optimize posenet or handpose javascript?

I’m working on an experiment where users can interact with a 3D object using gestures or their hands. PoseNet/Handpose is a great library, but the performance is not up to par just yet: even without any 3D object, the frame rate hovers around 10-12 FPS, which is not enough if you want to build an interactive installation.

Is there a way to optimize this, especially on macOS?

I’ve tried the following:

  • Using a web worker (didn’t help much)
  • Using WebSockets and running TensorFlow on the server (didn’t help much, because I can’t run the GPU backend)

What I haven’t tried:

  • Running a TPU server: a bit excessive and perhaps costly? Or is there an alternative to this?
  • Running it on an NVIDIA platform (might need to rent one)

submitted by /u/buangakun3

A package to sizeably boost your performance


I am glad to present the TensorFlow implementation of “Gradient Centralization”, a new optimization technique to sizeably boost your performance 🚀, available as a ready-to-use Python package!


Project Repo: https://github.com/Rishit-dagli/Gradient-Centralization-TensorFlow


Please consider giving it a ⭐ if you like it 😎. Here is an example showing the impact of the package!

https://preview.redd.it/69woozdxjui61.png?width=1280&format=png&auto=webp&s=0f3acbaf28a0dbc05455e1633eee9a82a95dae17
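For context, Gradient Centralization removes the mean from each multi-dimensional weight gradient before the optimizer applies it. A rough sketch of that idea follows; this is an illustration only, not the package’s actual API, and the GCAdam name is made up:

```python
import tensorflow as tf

def centralize_gradient(grad):
    # Gradient Centralization: for gradients of rank > 1 (conv/dense kernels),
    # subtract the mean taken over every axis except the last (output) axis.
    if grad is not None and isinstance(grad, tf.Tensor) and len(grad.shape) > 1:
        axes = list(range(len(grad.shape) - 1))
        grad = grad - tf.reduce_mean(grad, axis=axes, keepdims=True)
    return grad

class GCAdam(tf.keras.optimizers.Adam):
    """Adam that centralizes gradients before applying them (illustrative only)."""
    def apply_gradients(self, grads_and_vars, **kwargs):
        grads_and_vars = [(centralize_gradient(g), v) for g, v in grads_and_vars]
        return super().apply_gradients(grads_and_vars, **kwargs)

# Hypothetical usage:
# model.compile(optimizer=GCAdam(learning_rate=1e-3), loss="mse")
```

See the repo linked above for the package’s real usage and supported optimizers.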

submitted by /u/Rishit-dagli

TensorFlow Tutorials for Audio Processing? Preferably Separating Different Types of Audio

I have a dataset containing a lot of audio files that I want TensorFlow to differentiate between (for example, this audio is a woman counting numbers, that audio is just plain static).

I have no idea where to start, and googling just gives me some specific datasets that are used to differentiate songs.

What is a good place to start for audio processing?
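A common starting point is the approach in TensorFlow’s “Simple audio recognition” tutorial: decode the WAV files, turn them into spectrograms, and train a small CNN classifier on those. Below is a hedged sketch along those lines; the clip length, label meanings, and dataset variables are assumptions, not from the post:

```python
import tensorflow as tf

def load_wav(path):
    # Assumes 16-bit PCM WAV files; pad/truncate to 1 s of mono 16 kHz audio.
    audio_bytes = tf.io.read_file(path)
    waveform, _ = tf.audio.decode_wav(audio_bytes, desired_channels=1,
                                      desired_samples=16000)
    return tf.squeeze(waveform, axis=-1)

def to_spectrogram(waveform):
    # Short-time Fourier transform -> magnitude spectrogram with a channel axis.
    stft = tf.signal.stft(waveform, frame_length=255, frame_step=128)
    return tf.abs(stft)[..., tf.newaxis]

# Hypothetical file list and integer labels, e.g. 0 = speech, 1 = static:
# ds = (tf.data.Dataset.from_tensor_slices((wav_paths, labels))
#         .map(lambda path, label: (to_spectrogram(load_wav(path)), label))
#         .batch(32))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(None, 129, 1)),
    tf.keras.layers.GlobalMaxPooling2D(),
    tf.keras.layers.Dense(2),  # one logit per audio class
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
# model.fit(ds, epochs=10)
```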

submitted by /u/TuckleBuck88

Why don’t loss values make sense for Dice, Focal, and IoU for a boundary-detection U-Net in Keras?

I am using Keras for boundary/contour detection with a U-Net. When I use binary cross-entropy as the loss, the losses decrease over time as expected and the predicted boundaries look reasonable.

However, I have tried custom losses for Dice, Focal, and IoU with varying learning rates, and none of them are working well. I either get NaNs or non-decreasing/barely-decreasing loss values. This happens regardless of what I use for the LR, anywhere from 0.01 to 1e-6, and regardless of how I vary ALPHA, GAMMA, and the other parameters. This doesn’t make sense: in my images most of the pixels are background and the boundary pixels are a small minority, and for imbalanced datasets IoU, Dice, and Focal losses should work better than binary cross-entropy.

The code I used for the losses is from https://www.kaggle.com/bigironsphere/loss-function-library-keras-pytorch#Jaccard/Intersection-over-Union-(IoU)-Loss

```python
def DiceLoss(targets, inputs, smooth=1e-6):
    # flatten label and prediction tensors
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)

    intersection = K.sum(K.dot(targets, inputs))
    dice = (2 * intersection + smooth) / (K.sum(targets) + K.sum(inputs) + smooth)
    return 1 - dice


ALPHA = 0.8
GAMMA = 2

def FocalLoss(targets, inputs, alpha=ALPHA, gamma=GAMMA):
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)

    BCE = K.binary_crossentropy(targets, inputs)
    BCE_EXP = K.exp(-BCE)
    focal_loss = K.mean(alpha * K.pow((1 - BCE_EXP), gamma) * BCE)
    return focal_loss


def IoULoss(targets, inputs, smooth=1e-6):
    # flatten label and prediction tensors
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)

    intersection = K.sum(K.dot(targets, inputs))
    total = K.sum(targets) + K.sum(inputs)
    union = total - intersection
    IoU = (intersection + smooth) / (union + smooth)
    return 1 - IoU
```

Even if I try different code for the losses, such as the code below,

```python
smooth = 1.

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)


def dice_coef_loss(y_true, y_pred):
    return -dice_coef(y_true, y_pred)
```

the loss values still don’t improve. That is, it will show something like

```
loss: nan - dice_coef_loss: .9607 - val_loss: nan - val_dice_coef_loss: .9631
```

and the values won’t change much from epoch to epoch.
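One way to narrow this down, offered here only as a hedged suggestion, is to evaluate the loss on synthetic tensors outside of training and check whether it already returns NaN there (the shapes below are made up):

```python
# Hypothetical sanity check; assumes dice_coef_loss above was defined with
# `from tensorflow.keras import backend as K`.
import numpy as np
import tensorflow as tf

y_true = tf.constant(np.random.randint(0, 2, size=(4, 128, 128, 1)), dtype=tf.float32)
y_pred = tf.random.uniform((4, 128, 128, 1))   # stands in for sigmoid outputs in (0, 1)

print(float(dice_coef_loss(y_true, y_pred)))   # should be a finite value between -1 and 0
```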

Can anyone help?

submitted by /u/74throwaway

ImportError: cannot import name 'model_lib_v2' from 'object_detection' (C:\Users\ASUS\AppData\Roaming\Python\Python38\site-packages\object_detection\__init__.py)


Object Detection API import error

This is what I managed to do successfully. I basically followed the instructions from Gilbert Tanner’s GitHub.

```
git clone https://github.com/tensorflow/models.git
cd models/research

# Compile protos.
protoc object_detection/protos/*.proto --python_out=.
```

I manually copied setup.py from the packages/tf2 folder (object_detection/packages/tf2/setup.py), then ran:

```
# Install TensorFlow Object Detection API.
python -m pip install .
```

All these ran smoothly without warning messages.

I tested it with

```
python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000,1000])))"
```

and it works fine; I got the printed output without any error.

```
python generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record
```

works as well; I got the train.record file.

However, when I tried to train the model with

```
python model_main_tf2.py --pipeline_config_path=training/ssd_efficientdet_d0_512x512_coco17_tpu-8.config --model_dir=training --alsologtostderr
```

this came up:

https://preview.redd.it/aqkt00tzmki61.png?width=1167&format=png&auto=webp&s=3b2e296fc9b7282d9d604459a178a1e6fd7826b9
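One possible cause is that Python is importing a different (older or partial) object_detection installation than the one just built. A quick check, offered only as a hedged suggestion:

```python
# Hypothetical diagnostic: confirm which object_detection copy Python imports
# and whether that copy actually contains the model_lib_v2 module.
import importlib.util
import object_detection

print(object_detection.__file__)                                   # which install is used?
print(importlib.util.find_spec("object_detection.model_lib_v2"))   # None => module missing
```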

submitted by /u/slpypnda

layer doesn’t create weights when given input shape

In the “The Sequential model” guide on TensorFlow Core, the section “Specifying the input shape in advance” says:

Generally, all layers in Keras need to know the shape of their inputs in order to be able to create their weights. So when you create a layer like this, initially, it has no weights:

```python
layer = layers.Dense(3)
print(layer.weights)  # Empty
```

Output:

```
[]
```

But if I specify the input_shape, why is the output still empty?

```python
layer = layers.Dense(2, input_shape=(4,))

print(layer.weights)  # Still empty
```
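For reference, a standalone layer only creates its weights once it has been built, either by calling it on data or by calling build() explicitly. A small illustration of that (a sketch, not part of the quoted guide):

```python
import tensorflow as tf
from tensorflow.keras import layers

layer = layers.Dense(2, input_shape=(4,))
print(layer.weights)       # still [] -- the layer has not been built yet

layer.build((None, 4))     # or equivalently: layer(tf.zeros((1, 4)))
print(len(layer.weights))  # 2 -> kernel and bias have now been created
```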

submitted by /u/Ynjxsjmh

I taught a TensorFlow model to tell if one is not wearing a mask. It even gets angry when the mask is covering the chin and not the nose 😅. Runs as a WebAssembly application entirely inside the browser. No server interaction / storage whatsoever.

submitted by /u/preslavrachev

Cartesian product of tensors with batch dimension in Tensorflow

Hi, I’m a TensorFlow noob, and I would appreciate it if you could help me with this. I have 3 tensors of shape (?, 15) and I want to get the Cartesian product of all the possible combinations; my output should be of dimension (?, 15, 15, 15, 3). How can I do it?
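One way to get that output shape, sketched here under the stated shapes (the batched_cartesian_product helper is a made-up name, not an existing TF function), is to broadcast each tensor along the other two size-15 axes and stack the results on a new last axis:

```python
import tensorflow as tf

def batched_cartesian_product(a, b, c):
    """a, b, c: (batch, 15) -> (batch, 15, 15, 15, 3) holding every (a_i, b_j, c_k) triple."""
    n = tf.shape(a)[1]
    A = tf.tile(a[:, :, None, None], [1, 1, n, n])  # value depends only on index i
    B = tf.tile(b[:, None, :, None], [1, n, 1, n])  # value depends only on index j
    C = tf.tile(c[:, None, None, :], [1, n, n, 1])  # value depends only on index k
    return tf.stack([A, B, C], axis=-1)

# Example with a batch of 2:
a = tf.random.uniform((2, 15))
b = tf.random.uniform((2, 15))
c = tf.random.uniform((2, 15))
print(batched_cartesian_product(a, b, c).shape)  # (2, 15, 15, 15, 3)
```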

submitted by /u/holypapa96

New Training Opportunities Now Available Worldwide from NVIDIA Deep Learning Institute Certified Instructors

For the first time ever, the NVIDIA Deep Learning Institute is making its popular instructor-led workshops available to the general public. With the launch of public workshops this week, enrollment will be open to individual developers, data scientists, researchers and students. NVIDIA is increasing accessibility and the number of courses available to participants around the world.

The post New Training Opportunities Now Available Worldwide from NVIDIA Deep Learning Institute Certified Instructors appeared first on The Official NVIDIA Blog.


[Overview] MLOps: What It Is, Why it Matters, and How To Implement it

Both legacy companies and many tech companies doing commercial ML have pain points regarding:

  • Moving to the cloud,
  • Creating and managing ML pipelines,
  • Scaling,
  • Dealing with sensitive data at scale,
  • And about a million other problems.

At the same time, if we want to be serious and actually have models touch real-life business problems and real people, we have to deal with the essentials like:

  • acquiring & cleaning large amounts of data;
  • setting up tracking and versioning for experiments and model training runs;
  • setting up deployment and monitoring pipelines for the models that do get to production;
  • and finding a way to scale our ML operations to the needs of the business and/or the users of our ML models.

This article gives you a broad overview of the topic:

What is MLOps

submitted by /u/kk_ai