IMPLEMENT THE SELF-ATTENTION MECHANISM IN PYTORCH

A Python implementation of the (scaled dot-product) self-attention mechanism originally proposed in “Attention Is All You Need”. Note that this is intended to be an executable and extended version of this article.
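The core of the mechanism fits in a few lines: attention weights are the softmax of the scaled dot products between queries and keys, and the output is the weighted sum of the values. A minimal PyTorch sketch (function name and shapes are my own choices, not from the paper):

```python
import math

import torch
import torch.nn.functional as F


def scaled_dot_product_attention(q, k, v, mask=None):
    """Compute softmax(QK^T / sqrt(d_k)) V.

    q, k, v: tensors of shape (batch, seq_len, d_k).
    mask: optional boolean/int tensor; positions where mask == 0 are hidden.
    Returns the attended values and the attention weights.
    """
    d_k = q.size(-1)
    # Similarity scores, scaled to keep softmax gradients well-behaved
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)      # rows sum to 1
    return weights @ v, weights
```

In a full Transformer, q, k, and v come from learned linear projections of the input embeddings; the weights returned here are what attention visualizations plot.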

CHAPTER #3: FEATURE IMPORTANCE ON SAGEMAKER DATA WRANGLER

Hi folks! 👋 In this post, I will explain how to use Data Wrangler to calculate feature importance on your data set. To get started, I create a new data flow from the Amazon SageMaker Studio homepage.

CHAPTER #2: FEATURE IMPORTANCE WITH SHAP

Hi folks! 👋 In the previous post I talked about EDA and, in particular, statistical bias. Now it’s the turn of feature importance. What is feature importance? It is the idea of explaining how much each individual feature of a training data set contributes to the model’s predictions, using a score called the importance score.
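SHAP assigns each feature its Shapley value: the feature’s average marginal contribution to the prediction over all subsets of the other features. The shap library approximates this efficiently; as a toy illustration of what it computes, here is a brute-force exact version (my own sketch, exponential in the number of features, so only for tiny inputs):

```python
from itertools import combinations
from math import factorial


def shapley_values(f, x, baseline):
    """Exact Shapley values of f at point x, relative to a baseline point.

    Features inside a coalition S take their value from x; the rest are
    held at the baseline. f maps a list of feature values to a number.
    """
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi
```

For a linear model the Shapley value of feature i collapses to its coefficient times the feature’s deviation from the baseline, which makes the sketch easy to sanity-check.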

CHAPTER #1: WHAT IS THE STATISTICAL BIAS?

Hello there! 👋 As promised, I’ll begin this brand-new series of posts with a fundamental sub-process of EDA (i.e., Exploratory Data Analysis) that every aspiring Data Scientist has to face: dealing with statistical bias.
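To make “statistical bias” concrete, here is a minimal sketch (my own toy code, not an official implementation) of two common pre-training bias metrics in the style of those reported by SageMaker Clarify: class imbalance (CI) between an advantaged and a disadvantaged facet group, and the difference in positive proportions of labels (DPL) between the two groups:

```python
def class_imbalance(facet, advantaged):
    """CI = (n_a - n_d) / (n_a + n_d): how over-represented the advantaged
    group is in the facet column. 0 means perfectly balanced."""
    n_a = sum(1 for g in facet if g == advantaged)
    n_d = len(facet) - n_a
    return (n_a - n_d) / (n_a + n_d)


def diff_in_positive_proportions(labels, facet, advantaged, positive=1):
    """DPL = q_a - q_d: difference between the fraction of positive labels
    in the advantaged group and in the disadvantaged group."""
    n_a = sum(1 for g in facet if g == advantaged)
    n_d = len(facet) - n_a
    pos_a = sum(1 for y, g in zip(labels, facet) if g == advantaged and y == positive)
    pos_d = sum(1 for y, g in zip(labels, facet) if g != advantaged and y == positive)
    return pos_a / n_a - pos_d / n_d
```

A CI far from 0 warns that one group dominates the data set; a large DPL warns that the target itself is distributed unevenly across groups, both of which can bias a trained model.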

INTRODUCTION

Hello there! 👋 Nice to meet you, I’m Lorenzo Balzani, a 23-year-old, motivated and enthusiastic student from Ravenna, Italy. I obtained my BSc in Computer Science and Engineering from the University of Bologna in October 2021, with a dissertation in Artificial Intelligence and NLP.