As data-gathering technologies and data sources proliferate, massive quantities of data, on the order of hundreds or thousands of petabytes, will need to be processed in the future. Thus, it is critical that distributed computing systems for Big Data and AI (such as Hadoop, Spark, Flink, TensorFlow, and PyTorch) are diligently designed for high performance and scalability, in order to meet the growing demands of such Big Data applications.
PADSYS Lab members design and develop high-performance Big Data and AI systems and libraries for the emerging HPC and datacenter architectures. Our proposed designs and research studies aim to bring HPC, Big Data processing, AI, and Cloud Computing into a convergent trajectory.
Many thanks to NSF for supporting this research direction!
BIGDATA: F: DKM: Collaborative Research: Scalable Middleware for Managing and Processing Big Data on Next Generation HPC Systems. Prof. Xiaoyi Lu was one of the senior personnel on this grant.
High-Performance Big Data (HiBD). Prof. Xiaoyi Lu was the co-founder and R&D leader for this project when he worked at NOWLAB@OSU. Some of his students have also contributed significant effort and time to this project.
RDMA-TensorFlow. Prof. Xiaoyi Lu was the co-founder and R&D leader for this project when he worked at NOWLAB@OSU.