
Conversation

Member

@yuki-mt yuki-mt commented Feb 7, 2019

What is this PR for?

Added label to EvaluationMetrics so that it is clear which precision, recall, and fvalue values correspond to which labels.

This PR includes

  • add label to the EvaluateResult class (see the sketch after this list)
  • pass label IO to rekcurd_pb2.EvaluationMetrics
  • fix the tests
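
For context, a minimal sketch of what EvaluateResult might look like after this change. The constructor signature matches the diff discussed later in this thread; the PredictLabel union is an assumption for illustration, not quoted from the rekcurd source.

from typing import Dict, List, Union

# Assumption: PredictLabel covers whatever a user's predictor may return as a label.
PredictLabel = Union[str, bytes, List[str], List[int], List[float]]

class EvaluateResult:
    def __init__(self, num: int, accuracy: float, precision: List[float],
                 recall: List[float], fvalue: List[float], label: List[PredictLabel],
                 option: Dict[str, float] = {}):
        self.num = num
        self.accuracy = accuracy
        self.precision = precision
        self.recall = recall
        self.fvalue = fvalue
        self.label = label      # new: label[i] says which label precision[i]/recall[i]/fvalue[i] refer to
        self.option = option    # extra named metrics, e.g. {"accuracy_top3": 0.9}

The key point is that label is aligned index-by-index with precision, recall, and fvalue.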

What type of PR is it?

Feature

What is the issue?

N/A

How should this be tested?

python -m unittest test.test_dashboard_servicer

Member

@keigohtr keigohtr left a comment


Could you check my comment?

# Default field values in EvaluateResult (the question below refers to self.label)
self.recall = [0.0]
self.fvalue = [0.0]
self.option = {}
self.label = [0.0]
Member


Is it OK to set list[float] here?

Member Author


Yes. PredictLabel accepts List[float], and the IO (gRPC message) tensor's val field is repeated float.
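
A small sketch of the point being made here; the Union members below are assumed for illustration, and only the List[float] case is confirmed by this thread.

from typing import List, Union

# Assumed: PredictLabel is a union of possible label types, including List[float].
PredictLabel = Union[str, bytes, List[str], List[int], List[float]]

# The IO gRPC message stores numeric values in a tensor whose val field is
# `repeated float`, so a Python List[float] label maps onto it directly.
label: PredictLabel = [0.0, 1.0]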

Member


I mean that the return type of label depends on the user's predictor, as I wrote in rekcurd_worker_servicer.py.

Member Author


@keigohtr
I removed the default None value from EvaluateResult!


codecov-io commented Feb 8, 2019

Codecov Report

Merging #34 into master will increase coverage by 0.74%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master      #34      +/-   ##
==========================================
+ Coverage   81.17%   81.92%   +0.74%     
==========================================
  Files          14       14              
  Lines         712      708       -4     
==========================================
+ Hits          578      580       +2     
+ Misses        134      128       -6
Impacted Files                          Coverage Δ
rekcurd/rekcurd_dashboard_servicer.py   96.15% <100%> (+0.04%) ⬆️
rekcurd/utils/__init__.py               82.05% <100%> (+5.86%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 140cfdd...ad776e3.

Member

@keigohtr keigohtr left a comment


Could you check it again, please?

self.option = option
def __init__(self, num: int, accuracy: float, precision: List[float],
             recall: List[float], fvalue: List[float], label: List[PredictLabel],
             option: Dict[str, float] = {}):
Member


BTW, I think the option type is just dict, not Dict[str, float]...?

Member Author

@yuki-mt yuki-mt Feb 8, 2019


@keigohtr
The gRPC spec is map<string, float> here because I expected option to be used only for additional metrics (e.g. accuracy under a specific condition), so it needs to be Dict[str, float].

(It will be OK if we change the gRPC spec.)
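
For illustration, a sketch of how option works as additional metrics under the current map<string, float> spec. The message construction at the end is hypothetical, assuming the generated rekcurd_pb2 EvaluationMetrics message exposes option as a map field.

# option carries extra named metrics as string -> float,
# e.g. accuracy measured under a specific condition.
option = {"accuracy_when_confident": 0.93, "accuracy_top3": 0.88}

# Hypothetical packing into the gRPC message; protobuf map fields
# can be filled from a dict with update():
# metrics = rekcurd_pb2.EvaluationMetrics()
# metrics.option.update(option)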

Member


It's OK at the moment.

Could you make an issue for this?
Additional metrics must be a general-purpose field for ML evaluation; the current spec is tied to our use case, so we need to fix it.

Member Author


I did it! #35

Member

@keigohtr keigohtr left a comment


LGTM

@yuki-mt yuki-mt merged commit 50b72c3 into master Feb 8, 2019
@yuki-mt yuki-mt deleted the feature/add_label_to_metrics branch February 8, 2019 07:13