Ethically Embracing Artificial Intelligence (AI)
In a world where technological advancements often outpace our capacity to fully understand their implications, Affinity Bridge is committed to ethically embracing the benefits of Artificial Intelligence (AI) that are right for our organization and our values-aligned clients.
Integrating AI with humanity at heart
At Affinity Bridge, we see the transformative potential of AI. Rather than using it to replace human creativity or experience, however, we leverage AI as an augmentation tool, which means human expertise and empathy remain central to everything we do. We’re experimenting with services such as ChatGPT, Microsoft Copilot, JetBrains AI, Cursor, and Claude, and exploring others, but our use of AI is always tempered by our guiding principles and a clear commitment to human oversight.
We believe that large-scale AI owned by multinational corporations isn’t the long-term answer for humanity. These commercial models often prioritize profit over transparency and data sovereignty, making it challenging for organizations like ours—focused on privacy, justice, and environmental impact—to fully trust them.
Affinity Bridge is actively exploring self-hosted, open-source Large Language Models (LLMs) such as Llama, DeepSeek, Phi, Mistral, and Gemma, along with retrieval-augmented generation (RAG) setups. By hosting models ourselves, we retain greater control over both the data (yours and ours) and the outputs they generate. This approach not only helps ensure our outputs meet stringent ethical standards but also prepares us to pivot quickly as new, more responsible AI technologies emerge.
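To make that concrete, here is a minimal sketch of what a self-hosted RAG workflow can look like. It is illustrative only: it assumes an Ollama-style local model endpoint at http://localhost:11434 and a placeholder model name, and it uses a trivial keyword-overlap retriever in place of a real vector store. None of these specifics are prescriptive; the point is that both retrieval and generation run on infrastructure we control.

```python
# Minimal self-hosted RAG sketch (illustrative only).
# Assumes an Ollama-compatible endpoint serving an open-weight model locally;
# the endpoint URL and model name below are placeholders, not recommendations.

import json
import urllib.request

# A tiny in-memory "knowledge base"; in practice this would be client content
# indexed in a vector store that never leaves our own infrastructure.
DOCUMENTS = [
    "Canadian-based hosting keeps client data under Canadian privacy law.",
    "Self-hosted open-source LLMs keep client data under our own control.",
    "Human review is required before any AI-assisted output is delivered.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap (a stand-in for embedding search)."""
    q_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return scored[:k]

def generate(prompt: str, model: str = "llama3") -> str:
    """Send the augmented prompt to a locally hosted model endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # local endpoint: data never leaves our servers
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    question = "Where is client data hosted?"
    context = "\n".join(retrieve(question, DOCUMENTS))
    answer = generate(f"Use only this context to answer:\n{context}\n\nQuestion: {question}")
    print(answer)  # a human still reviews AI-assisted output before it reaches a client
```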
Ensuring data security and integrity
Data security is paramount in everything we do. Our team is FOIPPA-certified (Freedom of Information and Protection of Privacy Act), and we maintain strong practices around data security and governance. As part of our commitment to data sovereignty, especially for Canadian clients who want to keep their data in Canada, we work closely with CanTrust Hosting Co-operative to provide secure, Canadian-based web hosting solutions.
By favouring open-source frameworks, we reduce reliance on opaque corporate AI systems and reinforce our commitment to transparency and community collaboration. Combined with Canadian hosting, this helps ensure sensitive data remains protected under robust Canadian privacy laws. We are experimenting and collaborating with CanTrust, and we’re excited by the impressive capabilities we are seeing from self-hosted models.
Mindful of environmental impacts
While we don’t yet have formal targets to minimize our environmental impact, we’re keenly aware of concerns such as water usage and energy consumption in training AI models. Our team will continue to explore and implement best practices for reducing AI’s carbon footprint, from optimizing model training to choosing energy-efficient data centres.
AI-enhanced results for clients
By integrating AI into our workflows, we’ve seen tangible improvements in how we serve our clients. AI-supported research, enhanced code development support, and automated testing are making our processes faster and smarter. And because humans remain in the loop, every AI-driven output is backed by real expertise. This means greater value for our clients, as we can spend more time on what really matters: creative problem-solving and meaningful innovation.
Join us on this journey
As we continue to define the ethical use of AI, we invite you to join us. Whether you’re looking to start a new project, explore the potential of open-source AI, or simply ensure your organization is properly securing your data and taking a human-centred approach to AI adoption, our team is here to help. We aim to set new ethical AI standards for tomorrow’s technology needs.
Contact us to discuss your next big project. Together, we can create solutions that are innovative, responsible, and a positive force in the world.
------------
* The key image was created with DALL-E using prompts from our Creative Director, Jenn, and me. Jenn then produced the final design to match our desired outcome.