The following code is thoroughly legal in Swift:
You can also initialize a Double with an integer literal because Double conforms to IntegerLiteralConvertible.
let myDouble: Double = 1
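For contrast, the conversion works only for literals; an actual Int value does not implicitly become a Double (a minimal sketch):

let count = 1                 // inferred as Int
// let bad: Double = count   // error: an Int value doesn't implicitly convert
let good = Double(count)     // a real Int requires an explicit conversion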
This kind of thing gets on my style nerves. If you were theoretically writing an opinionated guide to Swift Style, what kind of advice would you give here? Which practice best reveals intent when reading code?
My first go at it is this: whenever you can accurately express the intended type of a constant value, prefer the explicitly typed literal over one that must be inferred or converted.
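Concretely, that advice prefers the second form below (the constant names are just for illustration):

let alpha: Double = 1      // legal: the integer literal converts
let beta: Double = 1.0     // preferred: the literal visibly matches the declared type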
14 Comments
Is it more likely I’ll make an error by using the wrong type, or by entering the wrong constant? For me, I’m more likely to mess up typing in the constant because of a bunch of extra zeroes. So I prefer using integer literals for Doubles, provided the values are whole numbers anyway.
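A hypothetical illustration of that point: with long whole-number constants, the trailing .0 is one more thing to miscount (the names here are made up):

let nanosecondsPerSecond: Double = 1000000000      // scan the zeroes once
let sameValueWithSuffix: Double = 1000000000.0     // same value, one extra token to verify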
Favor expressiveness over succinctness. Or, save the additional processor cycles needed to initialize a Double with an integer literal. Take your pick.
The thing that matters is whether your choice helps you build better code. Expressiveness is great, but it’s way easier for most humans to parse “1, 2, 3, 4” than “1.00, 2.00, 3.00, 4.00” so you have to ask yourself whether the tradeoff is worth it. I have debug descriptions all over my code that crop floating point numbers to 2 places – that’s succinctness that isn’t even showing me accurate data… but it’s worth it, because humans aren’t good at comparing a list of numbers like “1.23785275” and “123.986787213.” I’m not, anyhow.
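Both halves of that tradeoff are easy to sketch; String(format:) is one way to crop values for debug output, not necessarily what this commenter’s code uses:

import Foundation

let ticks: [Double] = [1, 2, 3, 4]       // easier to scan than [1.0, 2.0, 3.0, 4.0]
let reading = 1.23785275
print(String(format: "%.2f", reading))   // prints "1.24": succinct, but no longer exact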
That’s not an integer. You just think it is because you carry a lot of programming legacy baggage. Ideally we shouldn’t have to declare it a Double either, if we use it as a Double, and only as a Double, within a function:
func bar(n: Double) {}
func foo() {
    let x = 42   // wished-for: infer Double from the use below
    bar(x)       // in actual Swift, x is an Int and this fails to compile
}
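For contrast, a sketch of what does compile today, keeping the thread’s era call syntax:

func bar(n: Double) {}
bar(42)                // fine: a bare literal becomes a Double in context
let a: Double = 42     // fine: the annotation drives the literal's type
let b = 42             // inferred as Int...
// bar(b)              // ...so this would be an error without Double(b)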
Honestly, floating point is the one area where I would argue Swift doesn’t infer enough. Between SIMD and CoreGraphics, there are enough floating-point types around that I really wish Swift were smarter about going between them.
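The friction in question, as of the Swift of this thread (Swift 5.5 later added implicit CGFloat/Double conversion), looks roughly like this sketch:

import CoreGraphics

let scale: CGFloat = 2.0
let offset: Double = 10.0
// let total = scale + offset         // error: CGFloat and Double don't mix implicitly
let total = scale + CGFloat(offset)   // the cast dance the commenter is lamenting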
I don’t understand what’s bothering you here. Do you expect it to be CGRect(x: 0.0, y: 0.0…?
I’m aiming for a style that best supports reading and understanding. Selecting an initializer whose parameter types match the values being passed seems better styled.
The CGRect example is not completely on point. If there is any inference, it’s the choice of initializers, not a compile-time promotion. Command-clicking on CGRect shows the canonical initializers are init() and init(origin: CGPoint, size: CGSize); these three are in an extension:
public init(x: CGFloat, y: CGFloat, width: CGFloat, height: CGFloat)
public init(x: Double, y: Double, width: Double, height: Double)
public init(x: Int, y: Int, width: Int, height: Int)
This isn’t CGRectMake(), where you can rely on C type promotion, which really is in genus Unsafe.
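In other words, the “inference” is ordinary overload resolution: each call below picks a different one of those initializers based on argument type (a sketch):

import CoreGraphics

let a = CGRect(x: 0, y: 0, width: 1, height: 1)            // integer literals favor the Int overload
let b = CGRect(x: 0.0, y: 0.0, width: 1.0, height: 1.0)    // the Double overload
let c = CGRect(x: CGFloat(0), y: 0, width: 1, height: 1)   // the CGFloat overload; the other literals follow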
This is better than having to cast every parameter every time I calculate my bounds in the default floating-point type (which CGFloat is not; on 32-bit platforms it’s a typealias for Float). What I _intend_ is: “Make a CGRect with the values I have and don’t make me brush my teeth before you’ll let me, because we both know what I’m trying to do, so don’t be a jerk about it.”
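(For the curious, one way to check which type CGFloat wraps on the current platform; CGFloat.NativeType comes from the CoreGraphics overlay:)

import CoreGraphics

print(CGFloat.NativeType.self)   // Double on 64-bit platforms, Float on 32-bit ones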
Code is more often read than written. Using integer constants to construct a floating-point type, regardless of whether an initializer is available, seems counter to best expression.
I completely agree with you.
“Code is more often read than written” is more often written when someone thinks their style is clearer than someone else’s!
If I want to draw a rect on screen with origin (0,0) and with width and height of 1 – CGRect(x: 0, y: 0, width: 1, height: 1) expresses exactly my intent. It’s 100% clear what the end result will be, without extra ‘.0’s getting in the way.
Well, a 1 is a 1, and a 1 is also a 1.0 and a 1.0000 and so on; all is fine.
You should ask yourself: is this just my computer programmer’s OCD talking? When reading the code, does it make a difference in figuring out what it does… or is it my internal desire for “neatness”?
I agree with you. If the parameter types are floating point, I think the arguments should be passed as such. I think that’s better for understanding.
Whenever you see a call like method(x: 0.0, y: 0.0), you know at a glance that the parameters are floating point.
I think anything you can do (within limits, of course) that makes your code clearer for others (maybe even for your future self) is always good.