Week 7 - More.
April 28, 2025
In this week’s post, we explore the intricacies of the “altruism gene” and its role in a simulation where species must navigate an environment while interacting with food resources, chasers, and other agents. In particular, we will delve into how this gene is implemented in the code, how the simulation works, and how tensor processing was pushed further to optimize the agents’ behavior.
Altruism Gene in the Simulation
In the context of this simulation, the altruism gene influences how food resources are distributed among agents. Altruism refers to a behavior where individuals act in ways that benefit others at a cost to themselves. This gene is a key part of the “food collection” mechanism in the simulation.
When altruism mode is enabled, agents near a food spot will share its food based on their size: the larger the agent, the more food it receives. Specifically, the food is distributed proportionally to each agent’s size relative to the other agents at the spot. This results in a more collaborative environment where agents of different species may work together in a way that benefits all parties, although the larger agents will always receive more.
Here’s the relevant section of the code where altruism mode is implemented:
if altruism_enabled:
    # ALTRUISM MODE: split food based on size ratios
    size_sum = sum(SPECIES_RADII[pi] for pi, _ in fs.eaters)
    for pi, idx in fs.eaters:
        # Each agent gets food proportional to its size
        ratio = SPECIES_RADII[pi] / size_sum
        gain = FOOD_SCORE * ratio
        self.populations[pi].fitness[idx] += gain
else:
    # STEAL MODE: only the largest agent gets food
    sizes = [SPECIES_RADII[pi] for pi, _ in fs.eaters]
    maxr = max(sizes)
    winners = [
        (pi, idx) for (pi, idx), r in zip(fs.eaters, sizes) if r == maxr
    ]
    if len(winners) == 1:
        # Only one largest agent, so it gets all the food
        pi, idx = winners[0]
        self.populations[pi].fitness[idx] += FOOD_SCORE
This section checks whether altruism is enabled. If it is, the food is distributed proportionally based on the relative size of each agent. If not, food is awarded to the largest agent (or no one, in the case of a tie).
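To make the proportional split concrete, here is a minimal standalone sketch of the altruism branch. The radii, FOOD_SCORE, and eaters list are made-up illustration values; in the simulation they come from the config and the food spot’s state:

```python
# Hypothetical values -- the real constants live in the simulation's config.
FOOD_SCORE = 10.0
SPECIES_RADII = [2.0, 3.0, 5.0]  # one radius per species

# (species_index, agent_index) pairs currently at the food spot
eaters = [(0, 4), (1, 1), (2, 7)]

# Proportional split, as in the altruism branch above
size_sum = sum(SPECIES_RADII[pi] for pi, _ in eaters)  # 2 + 3 + 5 = 10
gains = {pi: FOOD_SCORE * SPECIES_RADII[pi] / size_sum for pi, _ in eaters}
print(gains)  # {0: 2.0, 1: 3.0, 2: 5.0}
```

The agent with radius 5.0 collects half the food, while the full FOOD_SCORE is always conserved across the eaters.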
Tensor Processing for Optimization
The core of this simulation involves the processing of large amounts of data, particularly the positions of agents, food, and chasers. This is where tensor processing with PyTorch becomes a crucial tool. Throughout the simulation, tensors are used to represent the positions, fitness values, and actions of agents, as well as environmental data such as food spots and chaser locations.
For example, in the Simulation class, the agent positions are managed as tensors to calculate distances and actions efficiently:
pos = pop.positions
env = torch.stack([pos[:, 0] / ENV_SIZE,
                   pos[:, 1] / ENV_SIZE,
                   (ENV_SIZE - pos[:, 0]) / ENV_SIZE,
                   (ENV_SIZE - pos[:, 1]) / ENV_SIZE], dim=1)
Here, the env tensor is created by normalizing the agent positions relative to the environment’s size (ENV_SIZE). This helps the neural network model understand the agents’ relative locations in the world.
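Filled out with a hypothetical ENV_SIZE and a toy batch of positions, the normalization looks like this. Each agent gets four inputs in [0, 1]: its normalized distance from the two low walls and the two high walls:

```python
import torch

ENV_SIZE = 100.0  # assumed environment side length for this sketch
# Hypothetical batch of two agent positions (x, y)
pos = torch.tensor([[25.0, 75.0],
                    [50.0, 10.0]])

# Four normalized wall distances per agent, stacked column-wise
env = torch.stack([pos[:, 0] / ENV_SIZE,
                   pos[:, 1] / ENV_SIZE,
                   (ENV_SIZE - pos[:, 0]) / ENV_SIZE,
                   (ENV_SIZE - pos[:, 1]) / ENV_SIZE], dim=1)
print(env)
# tensor([[0.2500, 0.7500, 0.7500, 0.2500],
#         [0.5000, 0.1000, 0.5000, 0.9000]])
```

Note that the first and third columns (and likewise the second and fourth) always sum to 1, since they measure distance to opposite walls.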
Another key tensor operation occurs in the Chaser class, where the chaser’s movement is determined by calculating the distance to the nearest agent:
distances = torch.sum((agent_positions - pos_tensor)**2, dim=1)
nearest_idx = torch.argmin(distances)
nearest_pos = agent_positions[nearest_idx]
These tensor operations are essential for optimizing the simulation, allowing the agents to calculate distances and move efficiently within the environment.
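With hypothetical positions plugged in, the nearest-agent lookup runs like this; note that squared distances are enough for argmin, so the square root is skipped:

```python
import torch

# Hypothetical positions: three agents and one chaser
agent_positions = torch.tensor([[10.0, 10.0],
                                [40.0, 40.0],
                                [90.0, 20.0]])
pos_tensor = torch.tensor([35.0, 45.0])  # chaser position

# Squared Euclidean distance to every agent at once, no Python loop
distances = torch.sum((agent_positions - pos_tensor) ** 2, dim=1)
nearest_idx = torch.argmin(distances)
nearest_pos = agent_positions[nearest_idx]
print(nearest_pos)  # tensor([40., 40.]) -- the closest agent
```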
Additionally, the neural network used by each agent (implemented in the BatchNeuralNetwork class) leverages tensor operations for its forward pass and for weight mutation, both key parts of the evolutionary process:
x = F.relu(torch.bmm(x, self.w1))
x = F.relu(torch.bmm(x, self.w2))
...
The torch.bmm() function is used for batch matrix multiplication, enabling efficient computation across multiple agents simultaneously. This allows for the training and evolution of a population of agents with minimal computational overhead.
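A minimal sketch of this batched forward pass, with invented layer sizes (the real BatchNeuralNetwork defines its own dimensions): each agent carries its own weight matrices, stacked along the batch dimension, and torch.bmm multiplies all of them in one call:

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes: population of 4 agents, 6 inputs, 8 hidden units, 2 outputs
POP, IN, HID, OUT = 4, 6, 8, 2

# One weight matrix per agent, stacked along the batch dimension
w1 = torch.randn(POP, IN, HID)
w2 = torch.randn(POP, HID, OUT)

x = torch.randn(POP, 1, IN)    # one input row per agent
x = F.relu(torch.bmm(x, w1))   # (POP, 1, IN) @ (POP, IN, HID) -> (POP, 1, HID)
out = torch.bmm(x, w2)         # (POP, 1, HID) @ (POP, HID, OUT) -> (POP, 1, OUT)
print(out.shape)  # torch.Size([4, 1, 2])
```

One bmm call thus replaces a Python loop over the whole population, which is where the speedup comes from.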
Conclusion
In conclusion, the altruism gene plays a critical role in shaping the behaviors of agents in the simulation, influencing their interaction with food and other agents. The use of tensor processing ensures that these interactions are handled efficiently, even as the number of agents and the complexity of the environment increases. Whether altruistic or competitive, the agents’ evolution is governed by both their neural networks and their interaction with the environment, making for an engaging and dynamic simulation.
As we move into future weeks, we’ll continue to refine these models, exploring the deeper implications of altruism in evolutionary simulations and enhancing the tensor-based processing for even greater efficiency. Stay tuned for more!
