Academic Lecture

    Adversarial Machine Learning


Posted: 2019-06-18

Title: Adversarial Machine Learning

Time: 2019-06-19, 16:00

Venue: Lecture Hall III-237, Main Building, Xidian University North Campus

Speaker: Fabio Roli

About the Speaker

Fabio Roli is a Full Professor of Computer Engineering at the University of Cagliari, Italy, and Director of the Pattern Recognition and Applications laboratory (http://pralab.diee.unica.it/). He is a partner and R&D manager of Pluribus One, a company he co-founded (https://www.pluribus-one.it). He has been doing research on the design of pattern recognition and machine learning systems for thirty years. His current h-index is 60 according to Google Scholar (June 2019). He has been appointed Fellow of the IEEE and Fellow of the International Association for Pattern Recognition. He was a member of the NATO advisory panel for Information and Communications Security, NATO Science for Peace and Security (2008–2011).

Abstract

Machine-learning algorithms are widely used in cybersecurity applications, including spam filtering, malware detection, and biometric recognition. In these applications, the learning algorithm must face intelligent and adaptive attackers who can carefully manipulate data to purposely subvert the learning process. Because machine-learning algorithms were not originally designed under such premises, they have been shown to be vulnerable to well-crafted, sophisticated attacks, including test-time evasion attacks (also known as adversarial examples) and training-time poisoning attacks. This talk introduces the fundamentals of adversarial machine learning through a well-structured overview of techniques for assessing the vulnerability of machine-learning algorithms to adversarial attacks (at both training and test time), along with some of the most effective countermeasures proposed to date. Application examples include object recognition in images, biometric identity recognition, and spam and malware detection.
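The test-time evasion attacks mentioned in the abstract can be illustrated with a minimal gradient-sign (FGSM-style) sketch. The linear classifier, weights, and sample below are illustrative assumptions, not material from the talk:

```python
import numpy as np

# A toy linear classifier: score = w @ x + b, predict class 1 if score > 0.
# Weights and input are arbitrary illustrative values.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A sample the classifier currently assigns to class 1.
x = np.array([2.0, 0.5, 1.0])

# Evasion step: for a linear score, the gradient of the score w.r.t. x is
# simply w, so subtracting eps * sign(w) lowers the score as much as
# possible per unit of L-infinity perturbation (the FGSM idea).
eps = 1.5
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the perturbed sample flips to class 0
```

For a deep network the principle is the same, except the gradient of the loss with respect to the input is obtained by backpropagation rather than read directly from the weight vector.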

