QUETRA: A Queuing Theory Approach to DASH Rate Adaptation

Investigator: Wei Tsang Ooi

As we race forward into the digital age, with computers increasingly capable of processing vast amounts of data for complex tasks, the potential benefits and risks of artificial intelligence (AI) have never been more salient. AI applications offer many advantages: intelligent systems can assist in vehicle automation, medical diagnosis, and customer product recommendation, to name just a few uses.

At the same time, the negative consequences of these technologies are widespread. The world is reeling from fake news spread by manipulating the algorithms that recommend news to social media users. Companies are accused of emotionally persuading customers into buying unwanted products and of fostering addiction to social media. Loan and job recommendation algorithms are seen as biased against particular demographic groups. Critical infrastructure systems are compromised by sophisticated security attacks launched through bots. Users' personal data is collected in large amounts and leaked on a regular basis, and algorithms can combine data from multiple sources to identify individuals' private information. Meanwhile, the risk of job loss due to the automation of manufacturing and service tasks has become a daily topic of discussion.

These risks raise critical issues regarding the fairness, safety, privacy, liability, and ethics of the design, development, and use of AI systems and the data they rely on. Challenges in governing AI and data use span the entire data life cycle: from when AI systems are designed and built, to how data is stored, processed, shared, and finally used. For instance, it is non-trivial to detect and encode human principles and values of fairness, accountability, and transparency into the design of new AI software and hardware. Challenges also exist in building safety requirements into AI systems, particularly critical systems such as those for intensive care, e-transactions, and public utilities management. These systems must be robust, transparent, secure, and subject to oversight, so that critical services do not fail and users' privacy and rights are not violated. Developers therefore urgently need guidelines and techniques for designing and building systems that incorporate such requirements upfront.

Multiple challenges also exist in sharing data among relevant stakeholders, e.g., companies and public agencies, whether in terms of data formats or privacy and legal issues. Yet the benefits of data sharing can be huge, making solutions to these challenges urgent. This work involves designing appropriate frameworks for data governance and data sharing in such contexts.

