
Conversation

@henry2craftman

Proposed change(s)

Description

This PR establishes the initial framework for the UR16e Pick and
Place project within the ML-Agents repository. It introduces the
core components required to train a UR16e robotic arm for a
pick-and-place task using reinforcement learning.

Key Changes

  • 🤖 Agent Implementation (UR16Agent.cs): A new agent script has
    been added to manage the robot's learning process. It defines
    the state observations, action space, and reward function
    tailored for the pick-and-place task (see the illustrative sketch after this list).
  • 🛠️ Unity Environment (UR16 agents.unity): A dedicated Unity
    scene is included, featuring the UR16e robot, the target object,
    and the necessary ML-Agents components for training and
    inference.
  • 📚 Documentation (README.md, README.ko.md): Comprehensive
    documentation has been added in both English and Korean. The
    READMEs explain the project's purpose, structure, and provide
    clear instructions on how to get started and run the simulation.
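The PR text summarizes the observations, actions, and rewards only at a high level. For readers unfamiliar with the ML-Agents C# API, a minimal agent for a task of this kind would look roughly like the sketch below. Everything here (class name, serialized fields, action scale, reward weights, success threshold) is an illustrative assumption, not the actual contents of UR16Agent.cs in this PR.

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Sensors;
using UnityEngine;

// Illustrative sketch only -- names, scales, and reward weights are assumptions,
// not the actual contents of UR16Agent.cs in this PR.
public class UR16AgentSketch : Agent
{
    [SerializeField] Transform endEffector;        // gripper / tool flange
    [SerializeField] Transform targetObject;       // object to pick up
    [SerializeField] Transform goal;               // place location
    [SerializeField] ArticulationBody[] joints;    // the six UR16e revolute joints

    const float kDegreesPerStep = 1f;              // assumed action scale
    const float kSuccessRadius = 0.05f;            // assumed success threshold (meters)

    public override void OnEpisodeBegin()
    {
        // Reset joint drive targets and randomize the pickup object's pose here.
        // The details depend on how the UR16 agents.unity scene is set up.
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Relative vectors describe the geometry of the task to the policy.
        sensor.AddObservation(targetObject.position - endEffector.position);
        sensor.AddObservation(goal.position - targetObject.position);

        // Current joint angle (radians) for each articulation.
        foreach (var joint in joints)
        {
            sensor.AddObservation(joint.jointPosition[0]);
        }
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // One continuous action per joint, interpreted as a target-angle delta.
        for (var i = 0; i < joints.Length; i++)
        {
            var drive = joints[i].xDrive;
            drive.target += actions.ContinuousActions[i] * kDegreesPerStep;
            joints[i].xDrive = drive;
        }

        // Dense shaping: pull the gripper toward the object and the object toward
        // the goal, with a small per-step time penalty.
        var reachDist = Vector3.Distance(endEffector.position, targetObject.position);
        var placeDist = Vector3.Distance(targetObject.position, goal.position);
        AddReward(-0.001f - 0.01f * reachDist - 0.01f * placeDist);

        if (placeDist < kSuccessRadius)
        {
            AddReward(1f);  // success bonus
            EndEpisode();
        }
    }
}
```

Dense distance-based shaping like this is a common way to make an otherwise sparse pick-and-place reward learnable with PPO; the observation and reward design actually used here should be read from UR16Agent.cs itself.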

Purpose

The goal of this PR is to formally initialize the project and
provide a solid foundation for future development and
experimentation with the UR16e robot. By adding the core logic
and documentation upfront, it makes the project accessible and
understandable for other contributors.

How to Test

  1. Clone the branch.
  2. Open the project in Unity Editor (version 6000.0.42f1 or
    compatible).
  3. Open the Assets/UR16 agents.unity scene.
  4. Enter Play mode to run the simulation (inference). For training from the command line, see the sketch after this list.
  5. Verify that the agent and environment are loaded correctly and the simulation runs without errors.
  6. (Optional) Review the code in UR16Agent.cs and the content of
    the new README files.
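The Play-mode check above exercises whatever policy is configured on the agent's Behavior Parameters. To reproduce the PPO training shown in the videos, the usual ML-Agents workflow is to launch the trainer from the command line and press Play when prompted. The config path and run ID below are placeholders; this PR does not state which trainer configuration file to use.

```bash
# Hypothetical config path and run ID -- substitute the trainer config used for this project.
mlagents-learn config/ppo/UR16.yaml --run-id=ur16_ppo
# When the trainer reports it is listening, press Play in the Unity Editor to start training.
```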

Videos
ML-Agent UR16 PPO 1
ML-Agent UR16 PPO 2
ML-Agent UR16 PPO 3
ML-Agent UR16 PPO 4
ML-Agent UR16 PPO 5

Useful links (GitHub issues, JIRA tickets, ML-Agents forum threads, etc.)

Types of change(s)

  • Bug fix
  • New feature
  • Code refactor
  • Breaking change
  • Documentation update
  • Other (please describe)

Checklist

  • Added tests that prove my fix is effective or that my feature works
  • Updated the changelog (if applicable)
  • Updated the documentation (if applicable)
  • Updated the migration guide (if applicable)

Other comments

This is the initial commit for the UR16e pick-and-place project using Unity ML-Agents.

This project is built upon a clone of the standard Unity ML-Agents repository and introduces a specific implementation for training a UR16e robotic arm.

Key additions in this commit:
- UR16Agent: A new ML-Agents agent script (`UR16Agent.cs`) designed for the pick-and-place task. It defines the agent's observations, actions, and reward structure.
- Unity Scene: The `UR16 agents.unity` scene, which contains the robot, target objects, and environment setup for training.
- Documentation: Multilingual README files (`README.md` for English, `README.ko.md` for Korean) explaining the project's purpose, structure, and how to run it.
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.
