Scaling Out an AI-Powered Recruiting Technology Solution That Screens Publicly Available Online Content to Help Employers Identify the Potential for Employee Misconduct

How an AWS Serverless Infrastructure Was Leveraged to Enhance Growth and Efficiency While Reducing Costs and Operational Drag

Fama Technologies, headquartered in Los Angeles, CA, provides an AI-based recruiting software solution that screens candidates’ publicly available online information across more than 10,000 sites to help organizations identify the potential for employee misconduct and avoid toxic hires. Today, with its ability to search billions of publicly available posts and comments, it is the largest screening company of its kind and a leader in applying machine learning (ML) technology to background screening services.

Enterprise HR and talent acquisition leaders trust Fama to identify a range of online behaviors at the point of hire to protect their organizations, cultures, and brands from liability, degradation, and reputational harm. Using screening and risk analysis, Fama flags abusive behaviors from digital footprints, such as employee misconduct, threats, violence, and other indicators that are often missed in the hiring process. By doing so, enterprises can be more confident that employees will embody an organization’s core values and goals.

Fama places great emphasis on the integrity and legality of their innovative product and has implemented it as a consent-based model. Their solution triple-authenticates profiles via a proprietary method that combines best-in-class artificial intelligence (AI) algorithms and trained investigators into a single workflow. Moreover, they consider themselves to be a “consumer reporting agency” and comply with major regulatory requirements such as the Fair Credit Reporting Act (FCRA), the General Data Protection Regulation (GDPR), and Equal Employment Opportunity Commission (EEOC) guidelines.

Business Challenge

Fama launched its business operations in 2015, relying exclusively on managed servers running a Java platform and Kubernetes (K8s) ― an open-source system for automating deployments and managing containerized applications. With this initial architecture, every new customer required Fama to add more servers and another layer of coordination between them to keep up with demand, which became expensive and time-consuming.

To keep up with demand, Fama knew they needed a new architecture that was scalable, organized, easy to use and maintain, and that reduced operational drag. They needed an infrastructure that could support the anticipated growth of their innovative AI-powered platform from the ground up.

How JBS Solutions Helped

To make this a success, Fama partnered with JBS Custom Software Solutions (JBS Solutions) to assess the current state of its software architecture, development practices, and operations. As a result, JBS Solutions recommended that Fama move all its infrastructure to the Amazon Web Services (AWS) platform to enhance the scalability, robustness, and efficiencies of its AI-based software solutions. More than 90% of Fama’s business operations are now built on AWS.

Software Solution Organized Into Functional ML Groups

To organize its AI-based software solutions on the AWS platform, Fama divided its cultural fit business model into functional machine learning (ML) groups or areas.

Each of these ML instances runs on AWS Fargate ― a serverless, pay-as-you-go compute engine that lets developers focus on building applications without the burden of managing servers. These instances efficiently evaluate content from an array of social media sites against Fama’s intelligent platforms. For example, the analysis platform analyzes the data and decides which items should be tagged for human review. AWS Fargate makes this possible by supporting the longer-running, higher-memory, computation-heavy workloads needed to serve many customer requests.
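The kind of decision the analysis platform makes ― passing some items through automatically while tagging ambiguous ones for a trained investigator ― can be sketched as a simple routing function. The scoring model, threshold values, and category names below are illustrative assumptions, not Fama's actual implementation.

```python
# Hypothetical sketch: route each scored content item either to an automatic
# flag, to human review, or through with no action. The risk score is assumed
# to come from an upstream ML model; the band values are placeholders.
from dataclasses import dataclass

REVIEW_BAND = (0.40, 0.85)  # scores in this band go to an investigator


@dataclass
class ContentItem:
    item_id: str
    text: str
    risk_score: float  # produced upstream by an ML model


def route_item(item: ContentItem) -> str:
    """Return 'auto_flag', 'human_review', or 'pass' for one content item."""
    low, high = REVIEW_BAND
    if item.risk_score >= high:
        return "auto_flag"      # high-confidence risk signal
    if item.risk_score >= low:
        return "human_review"   # ambiguous: tag for investigator review
    return "pass"               # no actionable signal
```

A threshold band like this is one common way to combine automated scoring with human review: only the uncertain middle of the score distribution consumes investigator time.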

Back-End Orchestration

Fama’s back-end orchestration is handled on Amazon EventBridge, a serverless event bus that allows for event-driven applications to run at scale across AWS, existing systems, or software as a service (SaaS) applications. It enables the different Fama ML functional areas to communicate with each other.

Amazon EventBridge then routes information to instances of AWS Lambda ― a serverless, event-driven compute service that runs code for virtually any type of application or backend service without provisioning or managing servers. From there, information is passed on to AWS Fargate instances.

Prior to using Amazon EventBridge, data streaming from one pipeline to another could not scale effectively. Now, with Amazon EventBridge and AWS Fargate, data scales efficiently and requests run in parallel ― significantly saving time and costs.
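In an event-driven setup like this, one ML functional area publishes an event to the bus and any subscribed area reacts to it. A minimal sketch of building such an event entry follows; the bus name, source, and detail-type strings are illustrative assumptions, not Fama's actual event schema.

```python
# Hedged sketch: construct one entry for Amazon EventBridge's PutEvents API.
# All names here are placeholders for illustration.
import json


def build_screening_event(candidate_id: str, stage: str) -> dict:
    """Build a single PutEvents entry announcing a pipeline stage change."""
    return {
        "Source": "screening.pipeline",        # hypothetical event source
        "DetailType": f"screening.{stage}",    # e.g. screening.analysis-complete
        "Detail": json.dumps({"candidateId": candidate_id}),
        "EventBusName": "screening-bus-example",  # hypothetical bus name
    }


entry = build_screening_event("cand-123", "analysis-complete")

# With boto3 (not imported here), the entry would be published like:
#   boto3.client("events").put_events(Entries=[entry])
```

Because every consumer subscribes to the bus rather than to the producer, new ML functional areas can be added without touching the pipelines that emit the events.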

Front-End Deployment

Fama’s front end is deployed on AWS Lambda with a standard user interface, though most of the business is done through API integrations.
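An API integration backed by Lambda typically means a small handler that receives an HTTP-style event and returns a JSON response. The sketch below assumes an API Gateway proxy-style payload and a hypothetical screening-status endpoint; the route, field names, and in-memory lookup are illustrative, not Fama's real interface.

```python
# Hedged sketch of a Lambda handler behind an API integration:
# GET /screenings/{id} returns the status of a screening request.
import json

# Stand-in for a real data store lookup (illustrative only).
_FAKE_STATUS = {"req-1": "in_review", "req-2": "complete"}


def handler(event: dict, context=None) -> dict:
    """API Gateway proxy-style handler returning a JSON status response."""
    request_id = (event.get("pathParameters") or {}).get("id", "")
    status = _FAKE_STATUS.get(request_id)
    if status is None:
        return {"statusCode": 404,
                "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200,
            "body": json.dumps({"id": request_id, "status": status})}
```

Because the handler is stateless, Lambda can run as many concurrent copies as incoming API traffic requires.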

Building, Training, and Deploying ML Models

Fama also uses Amazon SageMaker to train, evaluate, and serve their ML models. This solution offers a reliable infrastructure within their larger AWS footprint, helping them focus on designing and building with much less attention on maintenance.

The new serverless instance also allows Fama to serve ML models with higher throughput and fewer infrastructure worries. A batch transform service helps them run large models in the background; through a distillation process, this in turn produces better ML models for their products.
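A SageMaker batch transform job of this kind is described by a request to the CreateTransformJob API. The sketch below builds that request as a plain dictionary; the model name, S3 locations, and instance type are placeholder assumptions, not Fama's configuration.

```python
# Hedged sketch: arguments for SageMaker's CreateTransformJob API, which runs
# a trained model over a dataset in the background. Values are placeholders.
def build_batch_transform_request(job_name: str, model_name: str,
                                  input_s3: str, output_s3: str) -> dict:
    """Build keyword arguments for sagemaker.create_transform_job (boto3)."""
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix", "S3Uri": input_s3}},
            "ContentType": "application/jsonlines",
            "SplitType": "Line",   # score the input one record per line
        },
        "TransformOutput": {"S3OutputPath": output_s3},
        "TransformResources": {"InstanceType": "ml.m5.xlarge",
                               "InstanceCount": 1},
    }


req = build_batch_transform_request(
    "screening-batch-001", "risk-model-example",       # hypothetical names
    "s3://example-bucket/batch-in/", "s3://example-bucket/batch-out/")

# With boto3: boto3.client("sagemaker").create_transform_job(**req)
```

Running the large model offline this way is what makes distillation practical: its batch outputs can serve as training targets for a smaller production model.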

Object Storage

Amazon Simple Storage Service (S3) ― an object storage service offering industry-leading scalability, data availability, security, and performance ― is used for storing information at various stages in Fama’s software solution.

Most notably, the data gathered from social media and other digital platforms is placed in Amazon S3 before it is ingested into the system.
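Staging raw content in S3 before ingestion usually comes down to a deterministic object-key layout so downstream jobs can find items by source and date. The bucket name and key scheme below are illustrative assumptions, not Fama's actual layout.

```python
# Hedged sketch: build an S3 object key for one raw content item staged
# before ingestion, partitioned by source and collection date.
from datetime import date


def staging_key(source: str, item_id: str, collected: date) -> str:
    """Return a deterministic S3 key like raw/<source>/YYYY/MM/DD/<id>.json."""
    return f"raw/{source}/{collected:%Y/%m/%d}/{item_id}.json"


key = staging_key("social-site", "post-42", date(2023, 5, 1))
# → "raw/social-site/2023/05/01/post-42.json"

# With boto3: boto3.client("s3").put_object(
#     Bucket="staging-bucket-example", Key=key, Body=raw_json_bytes)
```

Date-partitioned keys like this also let batch jobs list and reprocess exactly one day's collection without scanning the whole bucket.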


Results

By migrating to and adopting Amazon EventBridge, Fama has successfully moved to an event-driven architecture that allows them to scale out while efficiently handling traffic spikes and other usage patterns. It also decreases operational overhead while significantly lowering overall infrastructure costs.

In addition, by moving to AWS and making all of the related components scalable ― including the Amazon SageMaker analysis components ― Fama can now effectively scale out to handle at least ten times the traffic it was formerly able to manage, without changing the structure of its existing software platform.

To do so, Fama only needs to increase the service budget to cover the larger number of requests going through the system. Simply put, when it comes to handling talent screening requests, the new architecture lets Fama easily and rapidly run as many screening requests as necessary ― in parallel ― without having to do anything else to make it happen.