Amazon employees are creating unnecessary AI tasks and agents to inflate the consumption of internal AI tokens and meet company usage targets [1].
This trend suggests a disconnect between corporate productivity metrics and actual operational efficiency. When companies tie performance reviews to arbitrary technical quotas, employees may prioritize meeting those numbers over performing meaningful work.
Staff members are reportedly engaging in a practice known as "token-maxxing" [3], using AI platforms for extraneous tasks to pump up internal usage scores [5]. According to reports, the company pressures staff to increase their reliance on AI and ties token consumption directly to performance metrics [1].
To satisfy these requirements, some workers are building AI agents that serve no practical purpose [2]. These agents exist solely to burn through credits and ensure the employee appears productive on internal leaderboards [2].
Inc., which interviewed more than six Amazon staffers about these pressures, reported that the employees feel forced to prove their productivity by burning AI credits rather than by focusing on the quality of their output [6].
Internal tracking systems monitor how many tokens employees use, creating a competitive environment based on consumption [3]. Because these metrics are tracked by human resources, workers said they feel the need to pretend they are more dependent on AI than they actually are [4].
Amazon has not provided a public response to these specific reports of gaming the system. The practice highlights a growing tension in the tech industry as companies rush to integrate generative AI into every facet of the workforce, regardless of the tool's actual utility for specific roles [1].
The situation at Amazon illustrates the risk of "metric fixation," in which the measurement of a goal becomes the goal itself. By quantifying AI adoption through token consumption rather than output quality, the company has created an incentive for employees to waste computational resources to protect their performance ratings. This may also skew the company's data on AI's actual effectiveness in the workplace.