
Hello readers,

In this article, I demonstrate CRUD operations on MongoDB, performing each operation both synchronously and asynchronously.

Basic knowledge of Node.js is assumed.

NoSQL database used: MongoDB

Table of Contents:

  1. Introduction
  2. Setting up MongoDB (NoSQL)
  3. Synchronous Vs Asynchronous Execution
  4. Creating a Schema, Model, Object
  5. CREATE (Synchronous)
  6. CREATE (Asynchronous)
  7. READ (Synchronous)
  8. READ (Asynchronous)
  9. UPDATE (Synchronous)
  10. UPDATE (Asynchronous)
  11. DELETE (Synchronous)
  12. DELETE (Asynchronous)
  13. Resources & References

1. Introduction

CRUD is an acronym for CREATE, READ, UPDATE & DELETE, the four basic operations of a database.

In this post, I use MongoDB, a NoSQL database, as my primary database…
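The article's examples use Node.js; as a language-neutral illustration of the synchronous-versus-asynchronous distinction that sections 5–12 revolve around, here is a minimal Python sketch. The `fetch_sync`/`fetch_async` functions are hypothetical stand-ins for database calls, not part of any MongoDB driver:

```python
import asyncio
import time

def fetch_sync(n):
    time.sleep(0.1)           # blocking I/O stand-in: nothing else can run
    return n * 2

async def fetch_async(n):
    await asyncio.sleep(0.1)  # non-blocking: yields control while waiting
    return n * 2

# Synchronous: each call blocks in turn, total ~0.3 s
start = time.perf_counter()
sync_results = [fetch_sync(n) for n in (1, 2, 3)]
sync_time = time.perf_counter() - start

# Asynchronous: the waits overlap, total ~0.1 s
async def main():
    return await asyncio.gather(*(fetch_async(n) for n in (1, 2, 3)))

start = time.perf_counter()
async_results = asyncio.run(main())
async_time = time.perf_counter() - start

print(sync_results, list(async_results))
```

The results are identical; only the elapsed time differs, which is exactly the trade-off the synchronous and asynchronous CRUD sections compare.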


For obfuscation, we will use a Python package called pyarmor.

We might sometimes face a situation where we need to hand code directly to a client, but by doing so we lose control of it. In such cases, we can encrypt the code to protect it, retain control, and add a fallback condition to limit our exposure, for example allowing the code to run only for a certain period of time.

To address these issues, I will demonstrate a simple function with these capabilities. …
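pyarmor itself handles the obfuscation; the time-limit fallback mentioned above can be sketched as a plain expiry guard baked into the shipped code. This is an illustration of the idea only, not pyarmor's actual licensing mechanism, and the cut-off date is hypothetical:

```python
from datetime import date

EXPIRY = date(2099, 12, 31)  # hypothetical cut-off agreed with the client

def check_licence(today=None):
    """Fallback guard: refuse to run after the agreed period."""
    today = today or date.today()
    if today > EXPIRY:
        raise RuntimeError("Licence expired - contact the vendor.")
    return True

check_licence()  # raises once the cut-off has passed
```

Once this guard is obfuscated along with the rest of the code, the client cannot simply edit the date out.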

Step-by-step instructions to build the OpenCV libraries with CUDA support, enabling GPU processing in OpenCV code.


By default, you do not need to enable OpenCV with CUDA for GPU processing. In production, though, when you have heavy OpenCV manipulations to perform on image or video files, the OpenCV CUDA module lets those operations run on the GPU rather than the CPU, which saves a lot of time.

Connecting the OpenCV library to CUDA was not as easy as it sounds; I went through a painful week-long process to establish the connection properly, and it is a process that consumes both time and money. …

In this post, we will build an LSTM-based Seq2Seq model with the Encoder-Decoder architecture for machine translation, without an attention mechanism.

Table of Contents:

  1. Introduction
  2. Data Preparation and Pre-processing
  3. Long Short Term Memory (LSTM) — Under the Hood
  4. Encoder Model Architecture (Seq2Seq)
  5. Encoder Code Implementation (Seq2Seq)
  6. Decoder Model Architecture (Seq2Seq)
  7. Decoder Code Implementation (Seq2Seq)
  8. Seq2Seq (Encoder + Decoder) Interface
  9. Seq2Seq (Encoder + Decoder) Code Implementation
  10. Seq2Seq Model Training
  11. Seq2Seq Model Inference
  12. Resources & References

1. Introduction

Neural machine translation (NMT) is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.

Translating from one language to another was long one of the hardest problems for computers…
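The article implements the model in a deep learning framework; as a bare-bones illustration of the "LSTM under the hood" section, here is a single LSTM step written directly in NumPy. The weights are random toy values, not a trained model, and the gate layout (input, forget, output, candidate stacked in one matrix) is one common convention:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: four gates computed from the input and previous state."""
    z = W @ x + U @ h_prev + b      # all four gate pre-activations at once
    H = h_prev.size
    i = sigmoid(z[0:H])             # input gate
    f = sigmoid(z[H:2*H])           # forget gate
    o = sigmoid(z[2*H:3*H])         # output gate
    g = np.tanh(z[3*H:4*H])         # candidate cell state
    c = f * c_prev + i * g          # new cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 4, 3                         # toy input and hidden sizes
x, h0, c0 = rng.normal(size=D), np.zeros(H), np.zeros(H)
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h1, c1 = lstm_cell(x, h0, c0, W, U, b)
print(h1.shape, c1.shape)
```

In the Seq2Seq setting, the encoder runs this step over the source sentence and hands its final `(h, c)` to the decoder as the initial state.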

Recreating one of the oldest convolutional neural network architectures.


I am starting a series of posts on Medium covering most of the standard CNN architectures, implemented in PyTorch and TensorFlow. I believe that after getting hands-on with the standard architectures, we will be ready to build our own custom CNN architectures for any task.

So I am starting with the oldest of the standard CNN architectures, LeNet (1998). It was primarily developed for the recognition of handwritten and other characters.

The architecture has a total of 7 layers, consisting of 2 sets of convolution and average-pooling layers, followed by a flattening convolution layer…
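The layer sizes in that description can be checked with the standard convolution output formula. A minimal sketch, assuming the classic LeNet configuration (32×32 input, 5×5 convolutions with no padding, 2×2 average pooling with stride 2):

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial size after a convolution or pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

s = 32                                # 32x32 input image
s = conv_out(s, kernel=5)             # C1 conv 5x5        -> 28
s = conv_out(s, kernel=2, stride=2)   # S2 avg pool 2x2    -> 14
s = conv_out(s, kernel=5)             # C3 conv 5x5        -> 10
s = conv_out(s, kernel=2, stride=2)   # S4 avg pool 2x2    -> 5
print(s)  # 5: the next 5x5 convolution flattens this to a vector
```

This is why the convolution after S4 acts as the "flattening" layer: a 5×5 kernel on a 5×5 feature map leaves a 1×1 output per channel.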

Machine Learning



On a lighter note, the embedding of a particular word (in a higher dimension) is nothing but a vector representation of that word (in a lower dimension). Words with similar meanings, e.g. “joyful” and “cheerful”, and other closely related words, e.g. “money” and “bank”, get nearby vector representations when projected into the lower-dimensional space.

The transformation from words to vectors is called word embedding.
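"Nearby" here is usually measured with cosine similarity. A tiny sketch with hypothetical, hand-picked 3-dimensional vectors (real embeddings are learned and much higher-dimensional):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hand-picked toy embeddings: related words point in similar directions
emb = {
    "joyful":   np.array([0.90, 0.10, 0.00]),
    "cheerful": np.array([0.85, 0.15, 0.05]),
    "money":    np.array([0.00, 0.20, 0.95]),
}

print(cosine(emb["joyful"], emb["cheerful"]))  # close to 1
print(cosine(emb["joyful"], emb["money"]))     # close to 0
```

A trained embedding would produce this geometry automatically from co-occurrence patterns in the text.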

So the underlying concept in creating a mini word embedding boils down to training a simple auto-encoder on some text data.

Some Basics:

Before we proceed to our creation of mini word embedding, it’s good to brush up…

Deep Learning

Implementing rudimentary to advanced operations on deep learning’s fundamental units.


I am accustomed to creating new deep learning architectures for different problems, but choosing which framework (Keras, PyTorch, TensorFlow) to use is often harder.

Given that uncertainty, it’s good to know the fundamental operations on each framework’s fundamental unit (NumPy arrays, Torch tensors, TensorFlow tensors).

In this post, I perform a handful of identical operations across the three frameworks, and also try my hand at visualizing most of them.

This is a beginner-friendly post, so let’s get started.

1. Installation

2. Version Check

3. Array Initialization ~ 1-D, 2-D, 3-D
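As a taste of the third item, here is 1-D, 2-D, and 3-D array initialization in NumPy; the same pattern carries over to `torch.tensor`/`torch.zeros` and `tf.constant`/`tf.zeros` in the other two frameworks:

```python
import numpy as np

a1 = np.array([1, 2, 3])   # 1-D: shape (3,)
a2 = np.zeros((2, 3))      # 2-D: shape (2, 3), filled with zeros
a3 = np.ones((2, 3, 4))    # 3-D: shape (2, 3, 4), filled with ones

print(a1.ndim, a2.ndim, a3.ndim)  # 1 2 3
```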

Computer Vision

Step-by-step instructions to train Yolo-v5 (from Ultralytics) and run inference to count and localize blood cells.

I vividly remember trying to build an object detection model to count the RBCs, WBCs, and platelets in microscopic blood-smear images using Yolo v3-v4, but I couldn’t reach the accuracy I wanted, and the model never made it to production.

Recently, I came across the release of the Yolo-v5 model from Ultralytics, which is built using PyTorch. …

Hey everyone, in this post we will familiarize ourselves with using Git and GitHub.

Git → Git is a version control system.

It allows you to record different versions of your project and also allows you to go back in time to check previous versions of your project.
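That record-and-go-back workflow can be demonstrated end to end. A minimal sketch driving Git from Python, assuming the `git` binary is on your PATH (the repository path, file name, and commit identity below are all made up for the example):

```python
import os
import subprocess
import tempfile

def git(args, cwd):
    """Run a git command in `cwd` and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, capture_output=True,
                          text=True, check=True).stdout

repo = tempfile.mkdtemp()                    # a throwaway project directory
git(["init"], repo)                          # start recording versions
with open(os.path.join(repo, "notes.txt"), "w") as f:
    f.write("first draft\n")
git(["add", "notes.txt"], repo)              # stage the change
git(["-c", "user.name=demo", "-c", "user.email=demo@example.com",
     "commit", "-m", "first version"], repo) # snapshot the current state
log = git(["log", "--oneline"], repo)        # every snapshot stays recoverable
print(log.strip())
```

Each commit in that log is a version you can check out again later, which is the "go back in time" ability described above.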

Machine and Deep Learning Engineer
