Making these algorithms work for LLMs
If we run these algorithms “out-of-the-box” on LLMs, things go badly. So, we came up with optimizations to the algorithms that fix the key issues with running them “out-of-the-box”.
For ELS, we needed to go from example-level DP guarantees to user-level DP guarantees. We found that prior work was adding orders of magnitude more noise than was actually necessary. We were able to prove that we can add significantly less noise, making the model much better while maintaining the same privacy guarantees.
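As a rough illustration, the standard group-privacy property of (ε, δ)-DP already translates an example-level guarantee into a user-level one once every user contributes at most k examples. The sketch below shows that generic conversion only; it is the loose baseline that a tighter analysis improves upon, and the numbers in the usage line are made up.

```python
import math


def example_to_user_level_dp(eps_example: float, delta_example: float, k: int):
    """Convert an example-level (eps, delta)-DP guarantee into a user-level one
    via the standard group-privacy property, assuming each user contributes at
    most k examples.

    Generic conversion: (eps, delta)-DP implies
    (k * eps, k * exp((k - 1) * eps) * delta)-DP for groups of size k.
    This is the loose baseline, not the tighter analysis described in the post.
    """
    eps_user = k * eps_example
    delta_user = k * math.exp((k - 1) * eps_example) * delta_example
    return eps_user, delta_user


# Hypothetical example: an example-level (0.1, 1e-10)-DP guarantee with a
# contribution bound of k = 8 examples per user.
print(example_to_user_level_dp(0.1, 1e-10, k=8))
```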
For both ELS and ULS, we had to figure out how to optimize the contribution bound. A “default” choice is to pick a contribution bound that every user already satisfies; that is, we don’t do any pre-processing. However, some users may contribute a large amount of data, and we would need to add large amounts of noise to provide privacy to those users. Setting a smaller contribution bound reduces the amount of noise we need to add, but the cost is having to discard a lot of data. Because LLM training runs are expensive, we can’t afford to train a bunch of models with different contribution bounds and pick the best one; we need an effective way to choose the contribution bound before we start training.
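Here is a minimal sketch of that pre-processing step, under the assumption that the dataset is given as (user_id, example) pairs and that users over the bound simply have a random subset of their examples kept; the function name and representation are illustrative, not from the post.

```python
import random
from collections import defaultdict


def cap_user_contributions(examples, contribution_bound, seed=0):
    """Pre-process a dataset so no user contributes more than
    `contribution_bound` examples.

    `examples` is an iterable of (user_id, example) pairs. Users over the
    bound keep a random subset of their examples; the rest are discarded.
    """
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for user_id, example in examples:
        by_user[user_id].append(example)

    capped = []
    for user_id, user_examples in by_user.items():
        if len(user_examples) > contribution_bound:
            user_examples = rng.sample(user_examples, contribution_bound)
        capped.extend((user_id, ex) for ex in user_examples)
    return capped
```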
After extensive experimentation at scale, we found that for ELS, setting the contribution bound to the median number of examples held by each user was an effective strategy. For ULS, we give a prediction of the total noise added as a function of the contribution bound, and found that choosing the contribution bound that minimizes this prediction was an effective strategy.
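The two heuristics can be summarized in a few lines. In the sketch below, `predicted_total_noise` is a hypothetical placeholder standing in for the actual noise prediction, which is not reproduced here, and the usage numbers are toy data.

```python
from statistics import median


def els_contribution_bound(examples_per_user):
    """ELS heuristic: use the median number of examples held by a user."""
    return int(median(examples_per_user))


def uls_contribution_bound(candidate_bounds, predicted_total_noise):
    """ULS heuristic: pick the candidate bound minimizing a prediction of the
    total noise added during training.

    `predicted_total_noise` is a placeholder callable; the post's actual
    prediction is not reproduced here.
    """
    return min(candidate_bounds, key=predicted_total_noise)


# Toy usage.
counts = [1, 2, 2, 3, 5, 8, 40]        # examples held by each user
print(els_contribution_bound(counts))  # -> 3
print(uls_contribution_bound(
    candidate_bounds=[1, 2, 4, 8, 16],
    predicted_total_noise=lambda b: 100.0 / b + 3.0 * b,  # hypothetical
))
```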