Ask for Suggestions about Research Topics #151
Comments
@Vul4Vendetta, thanks for your interest in this work! I think it depends on where you want to contribute. To start, I'll simply assume that, since you said you are an AI researcher, you're interested in how confidential computing makes AI more secure. (If that's not the case, please let us know.)

What ISLET ultimately wants to do is build a platform on top of ARM CCA, and we see AI as one of the key use cases that benefit from ISLET for security. From this view, the most straightforward way to contribute might be finding and implementing a new AI-related use case and integrating it into ISLET as an example (like the code prediction demo). But that feels like pure engineering work without much research impact.

So, if you want something more research-oriented (e.g., your goal is to write a research paper in which ISLET is used to realize your idea), adversarial AI would be a good starting point IMO. More concretely, that means coming up with a better way to tackle the current problems of adversarial AI by utilizing ISLET (plus the Certifier framework if needed). I think this would be a better way to contribute, since your team is a research organization. (Although we have to admit ISLET is currently not mature enough to be used this way.)

This is an oversimplified suggestion, and you may or may not build a concrete idea from it. If you want us to develop a more concrete idea together, we're willing to help. To do so, it would be helpful to hear more about what you're interested in.
Thanks for your reply. My team mainly focuses on AI security, including robust AI, AI backdoors, privacy protection, and so on. The techniques we use are things like model design or training design. As you can see, we start from the AI side, whereas ISLET seems to be an execution-environment protection approach, and that is where we find it difficult to get involved. Any insights on that?
@Vul4Vendetta, if you're interested in eliminating security problems in FL with the aid of ISLET (or confidential computing more generally), I can give you two suggestions. In the short term, you could improve our confidential-ml demo by demonstrating that some inference attacks can be defeated by ISLET (in collaboration with the Certifier framework).

In the long run, it would be a good topic to figure out ways to eliminate backdoor attacks in FL using ISLET, or more generally, using confidential computing. Preventing inference attacks with ISLET is relatively straightforward, but preventing backdoor attacks with ISLET (more precisely, with confidential computing) is hard. I think this could be a research topic that might grow into a full paper. As you may know, differential privacy is one way to mitigate this issue, but I don't think it's a perfect solution.
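As a rough illustration of the differential-privacy mitigation mentioned above, here is a minimal NumPy sketch of federated averaging with per-client update clipping and Gaussian noise. The function name and parameters are hypothetical, not part of ISLET or the Certifier framework; clipping bounds any single (possibly backdoored) client's influence, while the noise trades utility for a privacy-style guarantee, which is one reason this mitigation is imperfect:

```python
import numpy as np

def aggregate_with_dp(client_updates, clip_norm=1.0, noise_sigma=0.5, rng=None):
    """Federated averaging with per-client clipping and Gaussian noise.

    Clipping limits the contribution of any one client (including a
    backdoored one); the added noise degrades model utility, which is
    why this only partially mitigates backdoor attacks.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(u * scale)  # scale down updates exceeding clip_norm
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_sigma * clip_norm / len(client_updates),
                       size=avg.shape)
    return avg + noise
```

With `noise_sigma=0`, this reduces to plain federated averaging over clipped updates, which makes the clipping effect easy to inspect in isolation.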
@jinbpark Thanks, my teammates are discussing FL and confidential computing. Some papers mention that a TEE shouldn't be allocated too much memory, to keep the TCB small. Under such constraints, running AI inside the TEE incurs a latency problem because of page swapping and decryption, so how to deal with that is itself a research topic. Some of my teammates want to start from there, so may I ask whether you have any interest in ML execution under a limited memory budget?
@Vul4Vendetta, of course, I know the problem you mentioned. Some papers have discussed the shortage of available memory capacity for on-device ML operations, particularly training. For example, PPFL (MobiSys 2021) addressed this problem by running an ML model on a per-layer basis inside ARM TrustZone, so the problems you describe do happen. The main reason for the frequent page swapping is that ARM TrustZone has a fixed portion of secure memory. (SGX too.)

In that context, I'm fairly sure there would be no problem or significant performance degradation in running a small model in ARM CCA, even for training. But I think it would be a great research topic to deal with a large model in ARM CCA (ISLET), e.g., to figure out the performance bottlenecks in running a large model and how to optimize them.
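The per-layer approach PPFL takes can be sketched abstractly. The following is a minimal illustration, assuming hypothetical `load`/`unload` callbacks as stand-ins for copying a layer's weights into and out of limited secure memory; it is not an actual ISLET, PPFL, or TrustZone API:

```python
import numpy as np

def run_layer_by_layer(layers, x, load, unload):
    """Run inference one layer at a time so that only a single layer's
    weights reside in (limited) secure memory at once.

    `load(i)` returns layer i's weights (e.g. decrypted and copied into
    the enclave); `unload(i)` frees them before the next layer loads.
    """
    for i, layer_fn in enumerate(layers):
        w = load(i)          # bring one layer's weights into secure memory
        x = layer_fn(x, w)   # compute this layer's activations
        unload(i)            # release before loading the next layer
    return x
```

The design choice this captures is the trade-off in the comment thread: peak secure-memory use is bounded by the largest single layer, at the cost of repeated load/decrypt transitions, which is exactly where the latency problem for large models comes from.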
Hi, thanks for your great work.
I am an AI researcher from Swinburne University of Technology in Australia. I watched your code prediction demo and want to conduct research related to ISLET. May I ask if there are any particular topics you care about, so that my team could contribute to your work?
Thanks again.