The International Symposium on Foundation and Large Language Models (FLLM2023)

21-24 November, 2023 | Abu Dhabi, UAE
Colocated with
The 10th International Conference on Social Networks Analysis, Management and Security (SNAMS-2023)

KEYNOTE SPEAKERS

Abhimanyu Mukerji
Amazon Inc., USA



Title: Causal Inference, Machine Learning and Social Networks

Abstract

This talk will discuss the problem of causal inference and cutting-edge approaches to addressing it. We will focus on the use of machine learning and deep learning in this area, emphasizing applications and complexity in social and other networks. We will also discuss strategies for validating estimates and methods for implementing such models at scale.

Biography:

Abhimanyu is an Economist at Amazon working on dynamic causal models and causal machine learning. His prior research has used methods from machine learning, deep learning and natural language processing combined with econometric approaches to study problems in applied microeconomics and empirical corporate finance. He holds a PhD in financial economics from Stanford University.

 


Prof. Juyang Weng, IEEE Life Fellow
Brain-Mind Institute and GENISAMA, USA



Title: The First Conscious Learning Algorithm Avoids “Deep Learning” Misconduct

Abstract

From a fruit fly to a human, with many animal species in between, do animals share a set of biological mechanisms that regulate the lifelong development of their brains? We have seen very impressive advances in understanding the principles of neuroscience. However, what is still missing is a holistic algorithm that is both broad and deep. By broad, we mean that it approximates such mechanisms across a range of species. By deep, we mean that it specifies sufficient detail so that the algorithm can be biologically and computationally verified, and possibly corrected, across a deep hierarchy of scales, from neurotransmitters to cells, brain patterns, behaviors, intelligence, and consciousness across the time span of a life. This talk outlines such a conscious learning algorithm, called Developmental Network 3 (DN-3), the first in its category as far as the presenter is aware. None of its predecessors, Cresceptron, IHDR, DN-1, and DN-2, was capable of conscious learning. A major extension from DN-2 to DN-3 is that the model starts from a single cell inside the skull, so that brain patterning is fully automatic in a coarse-to-fine way. This biological model has been supported by computational experiments with real sensory data for vision, audition, natural language, and planning, to be presented during the talk. This first-ever algorithm for conscious learning is free from the “deep learning” misconduct found in systems such as ChatGPT.

Biography:

Prof. Juyang Weng received the BS degree from Fudan University in 1982 and the MSc and PhD degrees from the University of Illinois at Urbana-Champaign in 1985 and 1989, respectively, all in computer science. He is a former faculty member of the Department of Computer Science and Engineering, the Cognitive Science Program, and the Neuroscience Program at Michigan State University, East Lansing. He was a visiting professor at the Computer Science School of Fudan University from Nov. 2003 to March 2014, and did sabbatical research at MIT, at the Media Lab from Fall 1999 to Spring 2000 and at the Department of Brain and Cognitive Science from Fall 2006 to Spring 2007, where he taught BCS9.915/EECS6.887 Computational Cognitive and Neural Development during Spring 2007. Since the work of Cresceptron (ICCV 1993), the first deep learning neural network for the 3D world without post-selection misconduct, he has expanded his research interests in biologically inspired systems to developmental learning, including perception, cognition, behaviors, motivation, machine thinking, and conscious learning models. He has published over 300 research articles on related subjects, including task muddiness, intelligence metrics, brain-mind architectures, emergent Turing machines, autonomous programming for general purposes (APFGP), Post-Selection flaws in “deep learning”, vision, audition, touch, attention, detection, recognition, autonomous navigation, and natural language understanding. With T. S. Huang and N. Ahuja, he published a research monograph titled Motion and Structure from Image Sequences. He authored a book titled Natural and Artificial Intelligence: Introduction to Computational Brain-Mind. Dr. Weng is an Editor-in-Chief of the International Journal of Humanoid Robotics, the Editor-in-Chief of the Brain-Mind Magazine, and an associate editor of the IEEE Transactions on Autonomous Mental Development (now the IEEE Transactions on Cognitive and Developmental Systems). With others’ support, he initiated the International Conference on Development and Learning (ICDL) series, the IEEE Transactions on Autonomous Mental Development, the Brain-Mind Institute, and the startup GENISAMA LLC. He was an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and the IEEE Transactions on Image Processing.

 


Prof. Abdallah Khreishah
New Jersey Institute of Technology, USA



Title: How to Secure Machine Intelligence? Emerging Attacks and Defenses

Abstract

Over the past three decades, most research efforts in security and privacy have focused on network and storage security. Recently, Deep Neural Network (DNN) classifiers have gained wide adoption in complex tasks including natural language processing, computer vision, and cybersecurity. However, the underlying assumption of an attack-free operating environment has been defied by several attacks, such as adversarial examples and Trojan backdoor attacks. In adversarial attacks, the adversary perturbs input examples during inference to force the DNN to misclassify, while in Trojan backdoor attacks the adversary operates in both the training and inference phases. In the training phase, the adversary trains the DNN so that it behaves normally when the Trojan trigger is absent and misclassifies when the trigger is present. Because only the adversary knows the trigger, users are fooled into trusting the DNN model. The adversary can then attach the trigger to input examples during inference, causing the model to misclassify. In this talk, we will discuss our development of several computationally efficient defense approaches for adversarial attacks, enabling real-time detection of the attack for the first time. We will also discuss our development of an adaptive black-box defense approach for Trojan backdoor attacks that outperforms the state of the art by studying the relationships among the DNN's prediction logits. We will then discuss our recent follow-up work, in which we show how to combine the two adversaries above to launch a new stealthy attack, dubbed AdvTrojan. AdvTrojan is stealthy because it can be activated only when: 1) a carefully crafted adversarial perturbation is injected into the input examples during inference, and 2) a Trojan backdoor is implanted during the training process of the model. We leverage adversarial noise in the input space to move Trojan-infected examples across the model's decision boundary, making the attack difficult to detect. The stealthiness of AdvTrojan fools users into trusting the infected model as a classifier robust to adversarial examples. We will also discuss our future research, which focuses on extending these attack and defense mechanisms to new areas such as malicious domain detection, federated learning, personalized federated learning, and Graph Neural Networks. Finally, we will discuss several application domains of adversarial and Trojan backdoor attacks.
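
For readers unfamiliar with the two threat models combined in this talk, the short sketch below illustrates them in isolation. It is not the AdvTrojan code or the speaker's defenses; the tiny CNN, the 4x4 corner patch used as a trigger, the target class, and the FGSM epsilon are all arbitrary assumptions chosen only to make the example self-contained.

# Illustrative sketch (assumptions noted above), not the speakers' implementation:
# a pixel-patch Trojan trigger applied to training data, and a one-step
# FGSM-style adversarial perturbation crafted at inference time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
        self.fc = nn.Linear(8 * 28 * 28, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

def apply_trigger(images, patch_value=1.0, patch_size=4):
    """Stamp a small bright square (the 'trigger') into the bottom-right corner."""
    poisoned = images.clone()
    poisoned[:, :, -patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_batch(images, labels, target_class=0, poison_rate=0.1):
    """Training-time backdoor: trigger a fraction of the batch and relabel
    those examples to the adversary's target class."""
    n_poison = max(1, int(poison_rate * images.size(0)))
    idx = torch.randperm(images.size(0))[:n_poison]
    images, labels = images.clone(), labels.clone()
    images[idx] = apply_trigger(images[idx])
    labels[idx] = target_class
    return images, labels

def fgsm_perturb(model, images, labels, epsilon=0.1):
    """Inference-time evasion: move inputs in the direction of the sign of the
    loss gradient, the classic one-step FGSM perturbation."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = TinyCNN()
    x = torch.rand(16, 1, 28, 28)          # stand-in for real training images
    y = torch.randint(0, 10, (16,))
    x_poisoned, y_poisoned = poison_batch(x, y)   # what the backdoor adversary trains on
    x_adv = fgsm_perturb(model, x, y)             # what the evasion adversary submits
    print(x_poisoned.shape, x_adv.shape)

AdvTrojan, as described in the abstract, requires both conditions at once: the backdoor must have been implanted during training and the adversarial perturbation injected at inference, which is what makes the combined attack stealthier than either component alone.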

Biography:

Abdallah Khreishah received his Ph.D. and M.S. degrees in Electrical and Computer Engineering from Purdue University in 2010 and 2006, respectively. Prior to that, he received his B.S. degree with honors from Jordan University of Science & Technology in 2004. During the last year of his Ph.D., he worked with NEESCOM. In Fall 2012, he joined the Electrical and Computer Engineering department of NJIT as an Assistant Professor; he was promoted to Associate Professor in 2017 and Full Professor in 2023. He currently leads the engineering center for distributed machine intelligence at NJIT. His research spans the areas of machine learning, adversarial machine learning, wireless networks, visible-light communication, vehicular networks, and cloud & edge computing. He has been involved in research projects totaling more than $15M, funded by agencies such as the US National Science Foundation, the US Department of Defense, the New Jersey Department of Transportation, and the State of New Jersey. He has won several awards, including the best presentation award at INFOCOM 2018, the best paper award at ACM GLSVLSI 2023, the best paper award at SDS 2022, recognition as a distinguished TPC member of IEEE INFOCOM 2021, and the best symposium organization award from IWCMC 2018. He is currently serving as an associate editor for several international journals, including the IEEE/ACM Transactions on Networking. He served as TPC chair for WASA 2017, IEEE SNAMS 2014, IEEE SDS 2014, BDSN-2015, BSDN 2015, and IOTSMS-2015. He has also served on the TPC of several international conferences, such as IEEE INFOCOM. He has mentored several PhD students who currently hold leading positions in academia as well as industry. He is a senior member of the IEEE and chair of the IEEE EMBS North Jersey chapter.