I've come to believe that the first question these teams of well-meaning people should ask about whatever they're about to build is:
"How will this thing be used against its intended purpose?" How will it be broken, hacked, manipulated, used to derail the good intention it was designed for? If the software is being designed to keep some people safe, how will those trying to do harm use it? If it's intended to protect privacy, how will it be used to expose people or direct attention in some other dangerous way?
Think about it this way - every vulnerable community is vulnerable because some other set of communities and structures is making it that way. Your software probably doesn't (and can't) address those oppressive or exploitative actors' motives or resources. So when you deploy it, it will be used in the continuing context of intentional or secondary harms.
If you can't figure out the ecosystem of safety belts and airbags, traffic rules, insurance companies, drivers' education, and regulatory systems needed to help make sure that whatever you build does more help than harm, ask yourself - are we ready for this? Because things will go wrong. And the best tool in the wrong hands makes things worse, not better.