
Use implicit-cast: false in analysis? #312


Closed
isoos opened this issue May 10, 2018 · 15 comments

Comments

@isoos
Collaborator

isoos commented May 10, 2018

analyzer:
  strong-mode:
    implicit-casts: false

Suggestion from dart-lang/pub-dev#1263 and it was also detailed in flutter/flutter#14921 (comment)

/cc @jakobr-google, @kevmoo, @mraleph
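For readers unfamiliar with the flag, here is a minimal Dart sketch (not from the thread; the helper name is made up) of the kind of code that passes analysis with the default settings but would be rejected with implicit-casts: false:

// With the default settings the analyzer silently inserts the downcast;
// with implicit-casts: false the assignment below is an analysis error.
Object readAnswer() => 42; // hypothetical helper, statically typed as Object

void main() {
  String answer = readAnswer(); // implicit downcast Object -> String;
                                // throws a TypeError at runtime in Dart 2
  print(answer.length);
}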

@kevmoo
Member

kevmoo commented May 10, 2018

This will break many folks, I think. It can help find runtime issues, but folks need to build their code for it, or it'll create a sea of red. I don't think it's a good idea...

@isoos
Collaborator Author

isoos commented May 10, 2018

As an upside, would it warn people about what they need to fix for Dart 2?

@kevmoo
Member

kevmoo commented May 10, 2018 via email

@isoos
Collaborator Author

isoos commented Jun 6, 2018

@kevmoo: as we are moving closer to Dart 2, would you consider introducing this in the analysis? The lack of it will break folks when they migrate to Dart 2.

@kevmoo
Member

kevmoo commented Jun 6, 2018

CC @leafpetersen @vsmenon – this is a tricky one.

Implicit casts don't always cause errors – but fixing all implicit casts often fixes errors...
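To illustrate the point (a sketch, not from the thread): the same implicit downcast can be harmless or fatal at runtime, and the analyzer only flags both when implicit-casts: false is set.

final List<Object> items = <Object>['a', 'b'];

void main() {
  String first = items.first; // implicit cast Object -> String; happens to succeed
  items.add(3);
  String last = items.last;   // same pattern, but the value is an int:
                              // throws a TypeError at runtime
  print('$first $last');
}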

@leafpetersen
Member

leafpetersen commented Jun 6, 2018

Not sure I understand the context - this is about turning it on for this package? Or somehow turning it on for users of the package?

@isoos
Collaborator Author

isoos commented Jun 6, 2018

Not sure I understand the context - this is about turning it on for this package? Or somehow turning it on for users of the package?

The question is: shall we turn it on for all of the analyzed packages and make it part of the code quality score? (It is already enabled for pana.)

@leafpetersen
Member

leafpetersen commented Jun 6, 2018

Would this be just a separate bit? That is, you get 1 "point" (say) if your code passes the analyzer, and another point if it passes with --no-implicit-casts? If so I think it would be fine. If not though, I think it's a big ask of folks.

@kevmoo
Member

kevmoo commented Jun 6, 2018

Would this be just a separate bit? That is, you get 1 "point" (say) if your code passes the analyzer, and another point if it passes with --no-implicit-casts? If so I think it would be fine. If not though, I think it's a big ask of folks.

Right now there are no bonus points. You start w/ a perfect score and then you lose points if you miss things.

I could imagine some "extra love" for being implicit-cast clean... but that's adding a new feature.

@isoos
Collaborator Author

isoos commented Jun 6, 2018

What if we turned them on, but classified them as hints? That way they would be visible in the list of suggestions, but the penalty would be barely noticeable. I'm not sure if there is a matching key under analyzer: errors: though...
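For reference, the severity-override mechanism alluded to here is the analyzer: errors: map in analysis_options.yaml; a sketch follows. Whether the casts rejected by implicit-casts: false surface under a single diagnostic name that could be demoted this way is exactly the open question, so invalid_assignment below is only a placeholder showing the shape of the key.

analyzer:
  strong-mode:
    implicit-casts: false
  errors:
    # <diagnostic_name>: ignore | info | warning | error
    # invalid_assignment is shown only to illustrate the syntax; it is not
    # confirmed to be the code that implicit-cast violations report under.
    invalid_assignment: info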

@kevmoo
Member

kevmoo commented Jun 6, 2018

I worry about the noise here. I'll leave it to @mit-mit to handle, though.

@mit-mit
Member

mit-mit commented Jun 7, 2018

I'm not a big fan of having the score reflect quality for something that is opt-in. If we think this flag is high value, why is it not on by default?

@isoos
Collaborator Author

isoos commented Jun 11, 2018

I'm not a big fan of having the score reflect quality for something that is opt-in. If we think this flag is high value, why is it not on by default?

I think there is some space between "opt-in and we don't care about it" and "a strong standard that is enforced". We should consider these questions:

  • Do we want people to write (publicly shared) code to match certain standards?
  • How much effort is it to rewrite the existing code to match those standards?
  • How urgent is that change to happen? In other words: on which schedule and signaling do we want to indicate the urgency of that change?

For example, we have the following linter rules turned on for all analysis (a sketch of what one of them flags follows the list):

  rules:
    - camel_case_types
    - hash_and_equals
    - iterable_contains_unrelated_type
    - list_remove_unrelated_type
    - unrelated_type_equality_checks
    - valid_regexps
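As an illustration (not from the thread), here is the kind of code the hash_and_equals rule above flags: a class that overrides == without also overriding hashCode.

class Point {
  final int x, y;
  Point(this.x, this.y);

  @override
  bool operator ==(Object other) =>
      other is Point && other.x == x && other.y == y;
  // hash_and_equals fires here: hashCode is not overridden to match ==.
}

void main() {
  // The two points are equal by ==, yet the set may keep both, because the
  // default identity-based hashCode puts them into different buckets.
  final seen = <Point>{Point(1, 2), Point(1, 2)};
  print(seen.length);
}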

We have even stronger rules for Flutter packages:

analyzer:
  language:
    enableStrictCallChecks: true
  strong-mode: true
  errors:
    # treat missing required parameters as a warning (not a hint)
    missing_required_param: warning
    # treat missing returns as a warning (not a hint)
    missing_return: warning

These are opt-in for the average developer, but we felt that not matching these rules indicates a risk of potential errors, and we should include it in the code quality score. One thing that happened really early on: developers who cared did fix the issues we reported, even though the penalty on their code quality score was minuscule.

On the plus side, there is a developer-driven override for all of this: if the code has a file-level or an inline suppression of the linter rule, dartanalyzer won't report it, and we won't make it part of the report or the score. I think this is a good balance: the developer looks at a report and makes a conscious judgement call that the specific issue does not need to be addressed (for whatever reason it is in the codebase).

@kevmoo
Member

kevmoo commented Jul 19, 2018

We should close this issue, methinks...

@isoos
Collaborator Author

isoos commented Jul 19, 2018

agreed
