I’m Soroush Omranpour, a computer engineering graduate from Sharif University of Technology. Currently, I’m working on conversational AI as a deep learning engineer at ZLab, Fanap Co. I’m also working on symbolic music information retrieval as a research intern at the Music and AI Lab under the supervision of Yi-Hsuan Yang.
I started my research career at the Data Science and Machine Learning Lab (directed by Prof. Hamid R. Rabiee) at Sharif University with a project on “Fake news detection on social media” under the supervision of Maryam Ramezani. You can take a look at our paper here. Throughout this project, I implemented various baselines for fake news detection, such as the CSI model, in PyTorch.
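To give a flavor of what such a baseline looks like, here is a minimal PyTorch sketch in the spirit of CSI (Capture, Score, Integrate). All layer sizes and feature dimensions below are illustrative stand-ins, not those of the original paper or my implementation.

```python
import torch
import torch.nn as nn

class CSISketch(nn.Module):
    """Minimal sketch of a CSI-style fake news detector.
    Dimensions are hypothetical, chosen only for illustration."""
    def __init__(self, engagement_dim=16, user_dim=8, hidden_dim=32):
        super().__init__()
        # Capture: an LSTM over the temporal sequence of user engagements
        self.capture = nn.LSTM(engagement_dim, hidden_dim, batch_first=True)
        self.capture_out = nn.Linear(hidden_dim, 1)
        # Score: a per-user "suspiciousness" score from user features
        self.score = nn.Sequential(nn.Linear(user_dim, 1), nn.Sigmoid())
        # Integrate: combine article-level and user-level signals
        self.integrate = nn.Linear(2, 1)

    def forward(self, engagements, user_features):
        # engagements: (batch, seq_len, engagement_dim)
        # user_features: (batch, n_users, user_dim)
        _, (h, _) = self.capture(engagements)
        article_signal = self.capture_out(h[-1])             # (batch, 1)
        user_signal = self.score(user_features).mean(dim=1)  # (batch, 1)
        logit = self.integrate(torch.cat([article_signal, user_signal], dim=1))
        return torch.sigmoid(logit)  # probability that the article is fake

model = CSISketch()
prob = model(torch.randn(4, 10, 16), torch.randn(4, 5, 8))
print(prob.shape)  # one probability per article in the batch
```

The key design idea CSI captures is that both *how* an article spreads over time and *who* spreads it carry signal about its veracity.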
Recently, I defended my bachelor’s thesis on “Behavioral Analysis of Developers’ Activities in Group Projects”. The main idea behind this analysis is using integrated information as a group metric to evaluate teamwork and its effect on product quality. Since there was no easy-to-use piece of code available to calculate integrated information, I developed a Python package called “Social_phi”, accessible here and on PyPI.
I did an internship at IST Austria under the supervision of Prof. Christoph Lampert and Amélie Royer on “Abstract reasoning in neural networks”. The main idea behind that project was to solve Raven’s Progressive Matrices with a deep network. The main challenge was to generate the answer image instead of choosing from a set of candidates. Here is a short presentation of the work.
Beyond my deep passion for music composition and performance, I’m always eager to do music-related research in the computer science domain. So I started a research internship at the Music and AI Lab under the supervision of Prof. Yi-Hsuan Yang on symbolic music generation with Transformers. So far, I have developed three Python packages in this field:
You can also listen to some samples generated from scratch by one of my models here.
As a deep learning engineer and product manager at ZLab, I was responsible for a Persian speech recognition project (called Chista). This project had two main parts:
Another aspect of a smart assistant pipeline is a text-to-speech system. Thanks to the Transformer architecture and the advances it has brought to natural language processing, speech synthesis has become much more straightforward in recent years. To this end, I implemented a version of Transformer-TTS in PyTorch.
I’m currently involved in a semantic search project for an online shop. This project aims to make major improvements over the classic keyword-based document retrieval system by ranking documents by their distance in a latent semantic space, which overcomes the paraphrasing problem in the search engine.
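The core mechanism can be sketched in a few lines of plain Python. The toy 3-dimensional vectors and the `semantic_search` helper below are purely illustrative; in a real system the embeddings would come from a sentence encoder, and retrieval would use an approximate nearest-neighbor index rather than a linear scan.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query_vec, doc_vecs, top_k=3):
    """Rank documents by cosine similarity to the query in embedding space."""
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy "embeddings"; in practice these come from a trained sentence encoder.
docs = {
    "red running shoes": [0.9, 0.1, 0.0],
    "crimson sneakers":  [0.8, 0.2, 0.1],
    "blue winter coat":  [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # e.g. an embedding of "scarlet trainers"
results = semantic_search(query, docs, top_k=2)
print(results)  # -> ['red running shoes', 'crimson sneakers']
```

Note that “crimson sneakers” is retrieved even though it shares no keywords with a query like “scarlet trainers”; that is exactly the gap a keyword-based system cannot close.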
I made a Telegram bot to talk to when I’m bored. The bot uses a GPT-2 language model pretrained on Persian corpora and fine-tuned on my personal Telegram chats! It was like a cybercopy of me.
It’s a Transformer language model (currently GPT-2) pretrained on Persian corpora and fine-tuned on a dataset of Persian poems I crawled from the Ganjoor website. The model supports generating poems in the style of 76 famous Persian poets. For more details and samples, please visit this repo.
I wrote a simple synthesizer/loop module in Python using the Pyo package. I named it after one of my favorite figures in music history, “Zaryab”. This piece of code was used to build a loop station instrument out of a Raspberry Pi 4 and a set of vibration sensors.
I make some lo-fi chill music (with my Arturia MIDI keyboard and GarageBand) when I’m alone. You can listen to some of it on my SoundCloud.