For obfuscation, we will use a Python package called PyArmor.
We might sometimes face a situation where we need to provide code directly to a client for obvious reasons, but by doing so we lose control of it. In such cases, we can encrypt the code to protect it, retain control, and add a fallback condition to limit its use, for example allowing the code to run only for a certain period of time.
To address these issues, I will demonstrate a simple function with the capabilities above. …
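As a rough illustration (not the exact function from the post), the sketch below shows the kind of time-limited fallback you can bake into the code before obfuscating it; `EXPIRY_DATE` and `guarded_entry_point` are made-up names, and PyArmor itself also supports expiry through its own licensing options.

```python
import sys
from datetime import date

# Hypothetical time-based fallback: the delivered module refuses to run after a
# cut-off date. The names here are illustrative, not from the original post.
EXPIRY_DATE = date(2025, 12, 31)

def guarded_entry_point():
    if date.today() > EXPIRY_DATE:
        sys.exit("This build has expired; please request a new release from the vendor.")
    print("Running the protected business logic...")

if __name__ == "__main__":
    guarded_entry_point()
```

Once the script is obfuscated with PyArmor, the client can still run it but can no longer simply edit the date check out of the source.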
By default, there is no need to build OpenCV with CUDA for GPU processing, but in production, when you have heavy OpenCV manipulations to perform on image or video files, you can use the OpenCV CUDA module to run those operations on the GPU rather than the CPU, which saves a lot of time.
Connecting OpenCV to CUDA is not as easy as it sounds; I had to go through a painful week-long process to get the build working properly, and it is a time- and money-consuming process. …
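To give a feel for what the GPU path looks like once the build works, here is a minimal sketch assuming an OpenCV build compiled with CUDA support; the image file name is a placeholder.

```python
import cv2

# Requires an OpenCV build compiled with CUDA support (exposes the cv2.cuda module).
assert cv2.cuda.getCudaEnabledDeviceCount() > 0, "No CUDA-enabled device visible to OpenCV"

img = cv2.imread("frame.jpg")                 # placeholder input image
gpu_img = cv2.cuda_GpuMat()
gpu_img.upload(img)                           # copy the frame to GPU memory

gpu_resized = cv2.cuda.resize(gpu_img, (1280, 720))            # resize on the GPU
gpu_gray = cv2.cuda.cvtColor(gpu_resized, cv2.COLOR_BGR2GRAY)  # color conversion on the GPU

result = gpu_gray.download()                  # copy the result back to CPU memory
print(result.shape)
```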
Neural machine translation (NMT) is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.
Translating from one language to another was one of the hardest problems for computers…
Recreating the oldest CNN architecture.
I am starting a series of posts on Medium covering most of the standard CNN architectures, implemented in PyTorch and TensorFlow. I believe that after getting hands-on with the standard architectures, we will be ready to build our own custom CNN architectures for any task.
So I am starting with the oldest CNN architecture, LeNet (1998). It was primarily developed for the recognition of handwritten and machine-printed characters.
The architecture has a total of 7 layers: 2 sets of convolution and average-pooling layers, followed by a flattening convolution layer…
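As a preview, here is a minimal PyTorch sketch of that 7-layer LeNet-5 layout (layer sizes follow the 1998 paper; the post walks through the details):

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """Minimal LeNet-5: C1-S2-C3-S4-C5 feature extractor, then F6 and the output layer."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),     # C1: 32x32 -> 28x28
            nn.AvgPool2d(2),                               # S2: 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),    # C3: 14x14 -> 10x10
            nn.AvgPool2d(2),                               # S4: 10x10 -> 5x5
            nn.Conv2d(16, 120, kernel_size=5), nn.Tanh(),  # C5: the "flattening" convolution, 5x5 -> 1x1
        )
        self.classifier = nn.Sequential(
            nn.Linear(120, 84), nn.Tanh(),                 # F6
            nn.Linear(84, num_classes),                    # output layer
        )

    def forward(self, x):
        x = self.features(x)        # (N, 120, 1, 1)
        x = torch.flatten(x, 1)     # (N, 120)
        return self.classifier(x)

model = LeNet5()
print(model(torch.randn(1, 1, 32, 32)).shape)  # expects 32x32 grayscale input -> torch.Size([1, 10])
```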
Put simply, the embedding of a particular word is nothing but a vector representation of that word, mapped from a higher-dimensional representation down to a lower-dimensional one. Words with similar meanings, e.g. “Joyful” and “Cheerful”, and other closely related words, e.g. “Money” and “Bank”, get closer vector representations when projected into that lower-dimensional space.
This transformation from words to vectors is called word embedding.
So the underlying concept in creating a mini word embedding boils down to training a simple auto-encoder on some text data.
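As a toy illustration of the mechanics (not the full recipe from this post), the sketch below auto-encodes one-hot word vectors through a small bottleneck and reads the embeddings off the encoder weights; the vocabulary and dimensions are made up, and meaningful semantic closeness only emerges once context from real text is used as the training signal.

```python
import torch
import torch.nn as nn

# Toy vocabulary and dimensions, for illustration only.
vocab = ["joyful", "cheerful", "money", "bank"]
vocab_size, embed_dim = len(vocab), 2

encoder = nn.Linear(vocab_size, embed_dim, bias=False)   # one-hot word -> dense vector
decoder = nn.Linear(embed_dim, vocab_size, bias=False)   # dense vector -> word logits

one_hot = torch.eye(vocab_size)                          # each row is a one-hot word
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for _ in range(300):
    logits = decoder(encoder(one_hot))                   # reconstruct every word from its bottleneck code
    loss = loss_fn(logits, torch.arange(vocab_size))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

embeddings = encoder.weight.T.detach()                   # row i is the learned vector for vocab[i]
print(dict(zip(vocab, embeddings.tolist())))
```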
Before we proceed to create our mini word embedding, it’s good to brush up…
Implementing rudimentary to advanced operations on deep learning’s fundamental units.
I am accustomed to creating new deep learning architectures for different problems, but choosing which framework (Keras, PyTorch, TensorFlow) to use is often the harder part.
Given that uncertainty, it’s good to know the fundamental operations on each framework’s basic unit (NumPy arrays, PyTorch tensors, TensorFlow tensors).
In this post, I perform a handful of the same operations across the three frameworks, and also try my hand at visualizing most of them.
This is a beginner-friendly post, so let’s get started.
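As a small taste of what follows, here is the same array creation and matrix multiplication expressed in each of the three frameworks (assuming recent NumPy, PyTorch, and TensorFlow 2.x installs):

```python
import numpy as np
import torch
import tensorflow as tf

# The same 2x2 matrix in each framework's fundamental unit.
a_np = np.array([[1., 2.], [3., 4.]])
a_pt = torch.tensor([[1., 2.], [3., 4.]])
a_tf = tf.constant([[1., 2.], [3., 4.]])

# Matrix multiplication, framework by framework.
print(a_np @ a_np)             # NumPy ndarray
print(a_pt @ a_pt)             # PyTorch tensor
print(tf.matmul(a_tf, a_tf))   # TensorFlow tensor
```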
Step-by-step instructions to train YOLOv5 (from Ultralytics) and run inference to count and localize blood cells.
I vividly remember building an object detection model to count RBCs, WBCs, and platelets in microscopic blood-smear images using YOLOv3 and YOLOv4, but I couldn’t get the accuracy I wanted and the model never made it to production.
Recently I came across the release of the YOLOv5 model from Ultralytics, which is built using PyTorch. …
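To show how light the inference side is once training is done, here is a minimal sketch using the torch.hub interface the Ultralytics repo exposes; the checkpoint path and image name are placeholders, and the class names assume the blood-cell dataset labels.

```python
import torch

# Placeholder checkpoint path; assumes training produced best.pt for the blood-cell classes
# and that the ultralytics/yolov5 repo is reachable through torch.hub.
model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")

results = model("blood_smear.jpg")           # run inference on one microscope image
detections = results.pandas().xyxy[0]        # bounding boxes as a pandas DataFrame
print(detections["name"].value_counts())     # count detected RBCs, WBCs, and platelets
results.save()                               # write the annotated image under runs/detect/
```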
Hey Everyone,
In this post, I will share what argparse is and how to use it to handle command-line arguments.
Before diving in, let’s look at a very simple program in Python.
So the function add_and_display has four input arguments: a boolean value to control the output display, a description, and two numbers.
If we want to change any of the input arguments, we have to manually modify the corresponding values in the code, and as the code grows more complex, manipulating these values becomes hard.
So one workaround is to use command-line arguments. They are flags given…
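Here is a minimal sketch of that workaround (the original add_and_display body isn’t shown in this excerpt, so the implementation and flag names below are assumptions):

```python
import argparse

def add_and_display(display: bool, description: str, a: float, b: float) -> float:
    """Assumed implementation: add two numbers and optionally print the result."""
    result = a + b
    if display:
        print(f"{description}: {result}")
    return result

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Add two numbers and optionally display the result.")
    parser.add_argument("a", type=float, help="first number")
    parser.add_argument("b", type=float, help="second number")
    parser.add_argument("--description", default="Sum", help="text shown alongside the result")
    parser.add_argument("--display", action="store_true", help="print the result")
    args = parser.parse_args()

    add_and_display(args.display, args.description, args.a, args.b)
```

Running something like `python add.py 2 3 --display --description "Adding"` now changes the inputs without touching the code.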
Machine and Deep Learning Engineer