Rubin2020_PCW slack archives day4-thu-slot1a-plenary-scikeynote 2020-07-15---2020-08-13

Wed 2020-07-15 02:58PM
@Ranpal (she/her/hers) has joined the channel
Wed 2020-07-15 03:01PM
@Melissa Graham has joined the channel
Wed 2020-07-15 03:01PM
@Emily (she/her) has joined the channel
Keith Bechtol Thu 2020-07-23 05:43PM
@Keith Bechtol set the channel topic: <https://project.lsst.org/meetings/rubin2020/agenda/session/ai-sky-science-keynote>
Melissa Graham Wed 2020-08-05 03:00PM
@Melissa Graham set the channel topic: Plenary 4 - "AI in the Sky" Science Keynote by Dr. Brian Nord
Use the Zoom Q&amp;A and this channel to ask questions during the plenary.
Webpage: <https://project.lsst.org/meetings/rubin2020/agenda/session/ai-sky-science-keynote>
Melissa Graham Fri 2020-08-07 07:41PM
@Melissa Graham set the channel topic: "From disruption, opportunity: the current and future impact of AI on astronomy" by Dr. Brian Nord
<https://project.lsst.org/meetings/rubin2020/agenda/session/science-keynote-disruption-opportunity-current-and-future-impact-ai-astronomy>
Ranpal (she/her/hers) Thu 2020-08-13 07:03AM
Session: Plenary 4 - Science Keynote: From disruption, opportunity: the current and future impact of AI on astronomy.
Speaker: Dr Brian Nord @iamstarnord (he/him)
Date/time: Thursday August 13, 2020 - 06:00 HST - 09:00 PT - 12:00 EDT - 18:00 CEST - 02:00 AET +1
Connection: https://stanford.zoom.us/j/99549243447?pwd=NHNBejYyTHlIbHVkWWs0dHNBQ1p2UT09
Passcode: 257500
Details: https://project.lsst.org/meetings/rubin2020/agenda/session/science-keynote-disruption-opportunity-current-and-future-impact-ai-astronomy
Ranpal (she/her/hers) Thu 2020-08-13 12:12PM
Vote now! Click :a: or click :b:
Alison Rose Thu 2020-08-13 12:13PM
B
wvanreeven Thu 2020-08-13 12:13PM
Both
Luis Pineda Thu 2020-08-13 12:13PM
A
Sara Lucatello Thu 2020-08-13 12:13PM
both
Stefano Cavuoti Thu 2020-08-13 12:13PM
B
Jeno Sokoloski Thu 2020-08-13 12:13PM
A?
Jari Backman Thu 2020-08-13 12:13PM
A
andyxl Thu 2020-08-13 12:13PM
B
A Emery Watkins Thu 2020-08-13 12:13PM
A
Chuck Claver Thu 2020-08-13 12:13PM
A
Paula Thu 2020-08-13 12:13PM
A
Rob Sparks (he/him) Thu 2020-08-13 12:13PM
A
Adam Thornton Thu 2020-08-13 12:13PM
"probably both"
Sara (Rosaria) Bonito Thu 2020-08-13 12:13PM
Both
David Buckley Thu 2020-08-13 12:13PM
B
Robert Lupton Thu 2020-08-13 12:13PM
(click above on the red buttons)
Federica Bianco (she/her/hers) Thu 2020-08-13 12:13PM
click A or B in the original message
Nino Cucchiara (he/him/his) Thu 2020-08-13 12:13PM
both
Michael Strauss (he/his) Thu 2020-08-13 12:13PM
Both
Nino Cucchiara (he/him/his) Thu 2020-08-13 12:14PM
it is a bias, we are in an AI session
Adam Thornton Thu 2020-08-13 12:14PM
"Neither" seems unlikely given the topic of the talk.
Frossie Thu 2020-08-13 12:15PM
Does it matter? The AI one would not exist without the human one
iamstarnord (he/him) Thu 2020-08-13 01:17PM
yeah, this is definitely a good point. The AI one had to be trained on something.
iamstarnord (he/him) Thu 2020-08-13 01:17PM
For me, it's interesting to think about whether we can tell the difference.
iamstarnord (he/him) Thu 2020-08-13 01:18PM
I think it's maybe also interesting to imagine from here what other things might be created that we as humans find interesting, but wouldn't know how to create.
iamstarnord (he/him) Thu 2020-08-13 01:19PM
For example, with AlphaGo, the human player said they never even considered (maybe said "would never have considered") the strategies they saw the AI use to beat them.
Frossie Thu 2020-08-13 01:19PM
i am not dissing the question, to be clear, but i think it is worth noting that most examples do revolve around "here is how well AI can imitate human genius", which is fine, but it's not creating more genius
iamstarnord (he/him) Thu 2020-08-13 01:19PM
yeah, exactly.
iamstarnord (he/him) Thu 2020-08-13 01:19PM
I prefer to think of these as a jumping off point for what we could do with them.
Frossie Thu 2020-08-13 01:19PM
agreed
Frossie Thu 2020-08-13 01:21PM
(this is commonly an issue with diagnostic AIs: they can beat the average doctor, but you need great doctors to train them - they are a multiplier not an innovator, kinda like washing machines - you still end up with clean clothes but did not have to expend labor)
Kennylo Thu 2020-08-13 12:15PM
AI succeeded in 'fooling' us.
Robert Lupton Thu 2020-08-13 12:15PM
Or vice-versa?
Ranpal (she/her/hers) Thu 2020-08-13 12:17PM
It'll be interim question time in ~10 minutes - ask your questions in the Zoom Q&A or here
Ashley Villar (she) Thu 2020-08-13 12:19PM
wish we could show appreciation for these jokes in real time :joy:
Konstantin Malanchev Thu 2020-08-13 12:22PM
Why should we call it AI even if we use simple ML methods? Is the least squares method AI? Where is the intelligence in least squares?
iamstarnord (he/him) Thu 2020-08-13 01:20PM
This is a good question, and I hope as the fields move along, that we come up with clearer terminology and clearer ways of thinking about what different algorithms really do.
iamstarnord (he/him) Thu 2020-08-13 01:21PM
One problem with some definitions of ML/DL/AI is that you could then call LS or linear regression ML. That's kind of funny/cheeky, but not necessarily useful.
iamstarnord (he/him) Thu 2020-08-13 01:21PM
I think, currently the distinction is in how we think about the parameters and their origin.
iamstarnord (he/him) Thu 2020-08-13 01:22PM
But then again, if we use that framing that I'm advocating, maybe it still kicks the can down the road regarding what we mean by intelligence or artificial intelligence.
Konstantin Malanchev Thu 2020-08-13 01:23PM
Thank you!
iamstarnord (he/him) Thu 2020-08-13 01:23PM
that is, if the parameters being unintelligible implies artificial intelligence, that may be tantamount to saying it's only intelligence if we don't understand how it works.
iamstarnord (he/him) Thu 2020-08-13 01:23PM
lol, I think you and I may have falsified the way I presented that spectrum of models.
iamstarnord (he/him) Thu 2020-08-13 01:23PM
thanks! It's a thinker.
Konstantin Malanchev Thu 2020-08-13 01:25PM
My feeling is that AI is ML that has strong extrapolation skill. E.g. your example with translation between languages
Konstantin Malanchev Thu 2020-08-13 01:38PM
Thank you again for this discussion!
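As a concrete footnote to the least-squares discussion above, here is a minimal sketch (my own toy example, not from the talk; assumes NumPy and, for the second half, TensorFlow are installed) of the distinction being discussed: the same straight-line fit done once with parameters we interpret physically, and once with network weights nobody would ever report in a paper.

```python
# Toy contrast between "parameters with physical meaning" (least squares)
# and "opaque parameters" (a small neural net). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, size=x.size)  # noisy straight line

# Least squares: the two fitted numbers are the slope and the intercept.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"physical parameters: slope={slope:.2f}, intercept={intercept:.2f}")

# A tiny neural net fits the same relation, but its weights are not
# quantities we would interpret directly.  (Assumes TensorFlow/Keras.)
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x[:, None], y, epochs=200, verbose=0)
print("opaque parameters:", model.count_params(), "weights")
```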
Alison Rose Thu 2020-08-13 12:22PM
Could someone spell the name of the author Nord just referenced, please.
brant Thu 2020-08-13 12:23PM
Francois Chollet
Alison Rose Thu 2020-08-13 12:23PM
Thank you Brant.
iamstarnord (he/him) Thu 2020-08-13 01:24PM
https://twitter.com/fchollet
iamstarnord (he/him) Thu 2020-08-13 01:24PM
https://arxiv.org/abs/1911.01547
Orlando Mendez Thu 2020-08-13 12:22PM
intelligence itself is a difficult concept to frame :-)
Jim Annis Thu 2020-08-13 12:26PM
But I am 100% sure octopuses are intelligent.
Orlando Mendez Thu 2020-08-13 12:27PM
precisely, same as dolphins, and for sure ants, albeit in a more collective sense :-)
iamstarnord (he/him) Thu 2020-08-13 01:25PM
I haven't read all of Chollet's paper, but I find what I've seen in it to be useful. https://arxiv.org/abs/1911.01547
Kennylo Thu 2020-08-13 12:24PM
Ability to learn and to apply that learning?
Jim Annis Thu 2020-08-13 12:28PM
when a tree reacts to its environment at the cellular level and changes, say, to endure a drought - intelligence? It learned and applied the learning
Kennylo Thu 2020-08-13 12:29PM
All life forms are intelligent.
Adam Thornton Thu 2020-08-13 12:31PM
will not make snarky political comment
Merlin Thu 2020-08-13 12:31PM
There he is :smile:
Konstantin Malanchev Thu 2020-08-13 12:33PM
It is a definition of ML =)
iamstarnord (he/him) Thu 2020-08-13 01:28PM
@Jim Annis Yeah, I'm excited for a new way of thinking about this. I think it's really fraught right now, and it is potentially sending us in not very useful directions in research.
iamstarnord (he/him) Thu 2020-08-13 01:29PM
If we just want to make a fast inference model, then maybe we should just say that? And then be ready to deal with the consequences of challenges with error bars and bias related to optimization on an insufficient data set.
Melissa Graham Thu 2020-08-13 12:36PM
Bey's Theorem :clap:
parejkoj Thu 2020-08-13 12:37PM
Serious question: is there a current state-of-the-art in measuring how "close" your training set is compared to your target set?
iamstarnord (he/him) Thu 2020-08-13 01:37PM
I think this would be a fun project!
iamstarnord (he/him) Thu 2020-08-13 01:37PM
@Aleksandra Ciprijanovic thinks a lot about domain adaptation, too!
Aleksandra Ciprijanovic Thu 2020-08-13 01:45PM
Yes, this is something that I am very interested in. Generally the way that is done is by looking in the latent space of features extracted from the data from the two domains, then comparing the two distributions in some way. There are approaches known in statistics for quite some time that are also used in ML now... for example KL divergence, Maximum Mean Discrepancy, or some other metrics. I haven't seen a very extensive study of this in the context of machine learning though, so there is a lot more to be done.
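To make one of those metrics concrete, here is a minimal illustrative sketch (my own example, not Aleksandra's code) of the Maximum Mean Discrepancy with a Gaussian kernel, applied to two sets of feature vectors standing in for the latent representations of the training and target domains:

```python
# Squared Maximum Mean Discrepancy (MMD) between two samples of feature
# vectors.  In practice the features would come from the latent space of a
# trained network; here they are random placeholders.
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # pairwise squared distances between rows of a and rows of b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimator of the squared MMD between samples x and y."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(1)
train_features = rng.normal(0.0, 1.0, size=(500, 8))   # "source" domain
target_features = rng.normal(0.3, 1.2, size=(500, 8))  # shifted "target"
print("MMD^2 =", mmd2(train_features, target_features))
```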
Roy Williams Thu 2020-08-13 12:37PM
I have seen several talks about neural networks and finding features in images. What features are relevant when classifying light-curves?
iamstarnord (he/him) Thu 2020-08-13 01:37PM
Good question!
iamstarnord (he/him) Thu 2020-08-13 01:38PM
The first one-d neural net project I did was back in 2017 or 2018 when I played with stellar spectrum data with Adrian Price-Whelan.
iamstarnord (he/him) Thu 2020-08-13 01:39PM
At that time, Keras/TF wasn't ready for one-d data, so we had to muck about a bit, but it worked really well for regression in the end.
iamstarnord (he/him) Thu 2020-08-13 01:39PM
But, I digress.
iamstarnord (he/him) Thu 2020-08-13 01:40PM
If you're using raw light curves as the input data, then in representation learning, the deep net will find the features that are most significant.
iamstarnord (he/him) Thu 2020-08-13 01:41PM
One could look at the activation maps from a 1D NN trained on this data and see what pops out? I haven't tried this before, but I think it would be a fun thing to look at.
iamstarnord (he/him) Thu 2020-08-13 01:41PM
Also, it might help with investigating interpretability a bit, because the data is simpler than 2D images.
iamstarnord (he/him) Thu 2020-08-13 01:41PM
I would guess that it would find curvature as a feature.
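For anyone who wants to try the activation-map idea above, here is a minimal, hypothetical Keras sketch (the layer sizes and the random "light curves" are placeholders I made up, not anything from an actual Rubin pipeline): a small 1D convolutional classifier plus a second model that exposes the first conv layer's activation maps, so one can see which parts of the curve light up.

```python
# Small 1D CNN on raw light curves, with a helper model that returns the
# first-layer activation maps for inspection.  Assumes TensorFlow/Keras.
import numpy as np
import tensorflow as tf

n_points = 128          # samples per light curve (placeholder)
n_classes = 3           # e.g. transient classes (placeholder)

inputs = tf.keras.Input(shape=(n_points, 1))
x = tf.keras.layers.Conv1D(16, 7, activation="relu", name="conv1")(inputs)
x = tf.keras.layers.MaxPooling1D(2)(x)
x = tf.keras.layers.Conv1D(32, 5, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Model that returns the first-layer activation maps for inspection.
activation_model = tf.keras.Model(inputs, model.get_layer("conv1").output)

fake_light_curves = np.random.rand(4, n_points, 1).astype("float32")
maps = activation_model(fake_light_curves)
print(maps.shape)  # (4, 122, 16): 16 learned "features" along the curve
```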
Konstantin Malanchev Thu 2020-08-13 12:38PM
Can methods like normalising flows or invertible NNs help to solve the error propagation problem?
iamstarnord (he/him) Thu 2020-08-13 01:42PM
I'm still catching up on these, and I have a sense that the answer is, 'yes at least somewhat'. Happy to talk again in future after I understand these better.
iamstarnord (he/him) Thu 2020-08-13 01:43PM
I think some of our RubinObs colleagues have thought about them though. Maybe they will chime in!
Alison Rose Thu 2020-08-13 12:38PM
Question for Q&A: of all the AI pioneers currently alive, Nord singled out Chollet. Please say more about why his work stands out for you.
Orlando Mendez Thu 2020-08-13 12:41PM
yes, esp. when compared to the three ACM Turing award winners
https://awards.acm.org/about/2018-turing
Jim Annis Thu 2020-08-13 12:44PM
Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python.
Jim Annis Thu 2020-08-13 12:47PM
So, he wrote a tool that made it easy for people to experiment with a wide range of ML techniques
iamstarnord (he/him) Thu 2020-08-13 01:43PM
I mentioned Chollet, because I appreciate what his focus has been during the last few years --- people and tools.
iamstarnord (he/him) Thu 2020-08-13 01:45PM
While it's clear that Bengio, Hinton, and LeCun have done a lot of useful work over the past couple of decades, I'm wary of engaging in the development of 'heroes', which is now part of the discourse and zeitgeist around these three folks.
iamstarnord (he/him) Thu 2020-08-13 01:45PM
I think to overemphasize their potential for impact on the field at least treads close to enabling the accumulation of power.
iamstarnord (he/him) Thu 2020-08-13 01:46PM
Bengio is a co-author of the deeplearningbook that I mentioned.
Alison Rose Thu 2020-08-13 01:48PM
Yes. I understand.
Robert Nikutta Thu 2020-08-13 12:38PM
Q for @iamstarnord (he/him) In addition to the problem of error estimation/propagation in AI, there is the issue of explainability. Can you comment on it maybe? Example: many courts use black-box AI systems to sentence people to jail time, with no explainability even demanded from the AI system. Challenges to this have been repeatedly struck down in courts. How can we hope to put AI to good use if we can't even ensure that it "explains itself"?
Robert Nikutta Thu 2020-08-13 12:42PM
Relevant link on this: https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/
iamstarnord (he/him) Thu 2020-08-13 01:48PM
I see the statistical rigor of the algorithms as a key element of explainability, at least with respect to inputs and outputs. So I think that gets us there partially --- if we can make predictions we're confident in then we can probe the models more precisely.
iamstarnord (he/him) Thu 2020-08-13 01:48PM
Another avenue of pursuing interpretability/explainability is trying to find patterns within the nodes within the layers. People have investigated topological modeling for example.
iamstarnord (he/him) Thu 2020-08-13 01:51PM
I think even if AI could explain itself, people would still try to misuse it, as they have so many other technologies. I'm not sure how much of a problem it is for the tool, as opposed to a problem with our policies, which is a human problem.
Robert Nikutta Thu 2020-08-13 03:03PM
Thank you. I agree strongly on the human problem. For the specific case(s) of sentencing people, or weaponized AI: we need to regulate it, and not allow for the real and serious responsibility to be outsourced. "You will be judged by a jury of your peers" is a good point to start at.
Chris Lintott Thu 2020-08-13 12:40PM
The paper @iamstarnord (he/him) is talking about is Darryl Wright's supernova paper: https://arxiv.org/abs/1707.05223
Alison Rose Thu 2020-08-13 12:45PM
@Chris Lintott Hi Chris. We met at Oxford when Shipsey asked you to make some time. I'd like to circle back, after so much time, now that I understand more, to talk to you again about citizen science: the impact of SDSS and the potential for Rubin and the LSST. I'm interested in building community and DEI.
iamstarnord (he/him) Thu 2020-08-13 01:53PM
I also had this one in my head.
https://arxiv.org/abs/1802.08713
Chris Lintott Thu 2020-08-13 01:56PM
Ah, thanks, Brian. Didn't mean to reference for you.
iamstarnord (he/him) Thu 2020-08-13 02:37PM
no worries.
iamstarnord (he/him) Thu 2020-08-13 02:37PM
all useful!
Alison Rose Thu 2020-08-13 12:41PM
Another question: What is the term being discussed? "least squares" or "Lee's Squares"? Again, thank you!
Renee Hlozek Thu 2020-08-13 12:41PM
least squares
Rstreet Thu 2020-08-13 12:42PM
https://en.wikipedia.org/wiki/Least_squares
Dara Norman Thu 2020-08-13 12:42PM
With respect to your history description and also these questions about the use of AI in physics, do you have comments about the experiences of scientists breaking through AI deserts, or coming up with how to use AI to ask questions? Trying to get at the ideas of how we do science and who gets to do it.
iamstarnord (he/him) Thu 2020-08-13 01:54PM
yeah, this is a great question Dara.
iamstarnord (he/him) Thu 2020-08-13 01:55PM
I think my original thought still stands for the first part: I think we should respect the tool and make sure we understand it, rather than apply it willy-nilly.
iamstarnord (he/him) Thu 2020-08-13 01:56PM
I think this also applies to how we do science: I think it pays more to investigate the algorithms and make sure they're used in a safe way before jumping into an application.
iamstarnord (he/him) Thu 2020-08-13 01:57PM
Regarding who gets to do it: some results end up getting a lot of press or PIs end up getting a lot of opportunities because they already have 'big names', even if they don't know what they're talking about with respect to a given tool. There's an accumulation of cachet and thus influence which permits people to use tools they don't understand, and they aren't always held accountable for it.
Robert Nikutta Thu 2020-08-13 03:14PM
"Those who already have, shall have more". But on the serious side, the current(?) modus operandi, at least in the capitalist society, is to do it if you have the means, and deal with the fallout later. Examples: satellite megaconstellations, work on chimeras with human genes, "self-driving" cars that actually kill people (yeah, eventually fewer than human drivers do, but I can't blame the AI car, can I?). All these technologies are being deployed in the window between their inception and their regulation. I'm not too optimistic that one day this wild-west mentality won't cost all of us dearly.
Frossie Thu 2020-08-13 12:43PM
I do understand Brian's point but my inner five year old can't help sniggering at "it's Artificial Intelligence if YOU don't understand what the parameters mean physically"
Matthew (he/his) Thu 2020-08-13 12:44PM
There's a lot of AI in undergrad and graduate schools then!
Margaux Lopez Thu 2020-08-13 12:48PM
also my inner (outer?) 20-year-old
Knut Olsen Thu 2020-08-13 12:47PM
What do you think the role of human scientists will be if AI is able to make scientific discoveries independently? Noting that there is a Grand Challenge for AI to make a Nobel Prize-worthy discovery in medicine by 2050.
Robert Blum Thu 2020-08-13 12:49PM
Do we care about science results if we are not involved?
Knut Olsen Thu 2020-08-13 12:51PM
I think so! AI is much better at chess than humans, but humans are still interested in chess tournaments involving people, much less so about those involving machines. But AI has become a tool to help humans play better chess. I could imagine AI pointing the way to new scientific discoveries, and the role of humans being to understand them, as well as make their own.
Robert Blum Thu 2020-08-13 01:09PM
Yes, in your vision it's a collaboration. I was referring to the case where it's independent.
Knut Olsen Thu 2020-08-13 01:13PM
So if a machine makes a startling discovery, I think human scientists would very naturally work to understand it and to test it. Imagine a machine declaring that it had discovered what kind of particle makes up DM. We wouldn't not want to know it just because it was done by a machine, and would probably be extra energized at following up the breakthrough. Maybe I'm misunderstanding your point, though?
Robert Blum Thu 2020-08-13 01:52PM
Not really. You are thinking of ways machines can contribute to things we are actively working on and interested in. I guess I'm suggesting at some point we may lose interest as we are more and more removed from the discovery process.
Jim Annis Thu 2020-08-13 02:05PM
Have to say - the chess players were very interested in how AlphaGo played against the brute-force methods; machine against machine, but the style was so different it intrigued the masters. They -learned- from watching the games.
iamstarnord (he/him) Thu 2020-08-13 02:46PM
@Knut Olsen This is something I'm actively working on, and I'm open to the possibility of it. I'm not sure I would bank on it being the AI that we see today, but I can imagine automated discovery mechanisms.
Knut Olsen Thu 2020-08-13 02:48PM
Ok, thanks!
iamstarnord (he/him) Thu 2020-08-13 02:49PM
@Robert Blum This is something I've wondered/worried about.

My current approach is perhaps to extrapolate to how we've responded to technological advancements in the past, when they've removed the need for certain kinds of work. For example, we don't need to look up solutions to differential equations that much any more. But, we have found more questions to ask.
Robert Blum Thu 2020-08-13 02:56PM
Yes @iamstarnord (he/him), that makes sense. Aligned with @Knut Olsen's vision I think. Use it as leverage, maybe very substantial leverage. I think a lot of people are talking about AI and the possibility of it making a mistake after we have ceded some authority to it. I am less worried about that since AI will overall make far fewer mistakes than humans. But the problem I see is that AI will make it much easier to leverage bad human behavior. Think of the internet.
iamstarnord (he/him) Thu 2020-08-13 03:00PM
with respect to sci discovery (had to pause a minute):
Maybe this is an interesting example: an AI creates the intersection of two scientific fields, asks a question that hasn't been asked before, and then discovers some new physical phenomenon. Let's say that this new discovery removes the need for some investigation that was taking place.

I could see us continuing to wonder and then start asking questions based on that discovery, and maybe that cycles through. Eventually, if we operate more slowly than the AI, I think it gets into territory where I'm still wondering how to think about what I would do with my sense of curiosity.
Knut Olsen Thu 2020-08-13 03:06PM
I wonder if the realm of discovery is so large that there will always be room for human intuition to explore territory not covered by AI. Or if AI will have blind spots that humans can bypass. Echoing Roger Penrose's Emperor's New Mind here, I think.
Robert Blum Thu 2020-08-13 03:12PM
both of your visions are more encouraging than mine!
Luis Pineda Thu 2020-08-13 12:48PM
Is it right to say that bias in labeling, and thereby classification, is one of the main challenges in AI?
Luis Pineda Thu 2020-08-13 12:52PM
Or is it possible to build AI models to recognize bias and thereby build more accurate classifications?
Orlando Mendez Thu 2020-08-13 12:48PM
on the topic of recidivism + AI, here's an article: https://cacm.acm.org/careers/246743-ai-examines-early-intervention-opportunities-for-parolees/fulltext
Eli Rykoff Thu 2020-08-13 12:49PM
Can "AI" do physics without error propagation? What is a measurement without an error? Is it "physics"?
Jim Annis Thu 2020-08-13 12:49PM
I suppose I'll have to look and see if Newton used error propagation in the Principia Mathematica
Roy Williams Thu 2020-08-13 12:50PM
Similarly non-detection at a certain limiting magnitude. How can that be used?
Konstantin Malanchev Thu 2020-08-13 12:52PM
The most obvious area of ML usage is to offload some work from humans to machines. For example, we can train ML to do an expert's work, which doesn't involve error propagation either. Examples are classification problems, e.g. light-curve transient classification, stellar spectrum classification, image classification (star vs. galaxy).
iamstarnord (he/him) Thu 2020-08-13 01:59PM
Precisely the problem, @Eli Rykoff
iamstarnord (he/him) Thu 2020-08-13 02:00PM
We've tried to explore this a little bit:
https://arxiv.org/abs/2004.10710
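For context on what attaching rough error bars to a deep-learning prediction can look like in practice, here is a minimal illustrative sketch of one commonly used approach, Monte Carlo dropout (my own toy example, assuming TensorFlow is available; not necessarily the method explored in the linked paper):

```python
# Monte Carlo dropout: keep dropout active at inference time and read the
# spread of repeated predictions as a crude uncertainty estimate.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(1,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

# Toy data: y = sin(x) + noise (placeholder for a real regression target).
rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=(1000, 1)).astype("float32")
y_train = np.sin(x_train) + rng.normal(0, 0.1, size=x_train.shape).astype("float32")
model.fit(x_train, y_train, epochs=20, verbose=0)

# Call the model with training=True so dropout stays on, then summarize
# many stochastic forward passes.
x_test = np.linspace(-3, 3, 5, dtype="float32")[:, None]
samples = np.stack([model(x_test, training=True).numpy() for _ in range(100)])
print("mean:", samples.mean(axis=0).ravel())
print("std :", samples.std(axis=0).ravel())
```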
Francisco Forster Thu 2020-08-13 12:50PM
How can we transition from understanding correlation to understanding causality?
Robert Nikutta Thu 2020-08-13 12:52PM
There's some work being done on it: https://www.theatlantic.com/technology/archive/2018/05/machine-learning-is-stuck-on-asking-why/560675/
parejkoj Thu 2020-08-13 12:50PM
"In the dome where it happens" :thumbsup:
Adam Thornton Thu 2020-08-13 12:52PM
probably shouldn't ask whether empathy, compassion, and justice are inherently antithetical to capitalism, much less silicon-valley-funded libertarian-sociopath ai-money late-stage capitalism
Alison Rose Thu 2020-08-13 01:00PM
Loved your Lightning Story. I look forward to reconnecting with you. I've scoped out my Doc project and spec'ed data storage. It would be great to talk to you about my solution for remote screening and editing. Do you have time in the next few days? Could we talk over the weekend?
Adam Thornton Thu 2020-08-13 01:05PM
Sure, the weekend is good. Saturday is better than Sunday, though sometime between noon and 3 Sunday would work. Friday post-conference (but not 1:30-2:30) works for me too.
Chuck Claver Thu 2020-08-13 12:52PM
With Rubin the discoveries don't happen in the dome - they happen in your mind
Jim Annis Thu 2020-08-13 12:53PM
kepler vs brahe?
Frossie Thu 2020-08-13 12:53PM
attn @Amanda Bauer
Frossie Thu 2020-08-13 12:52PM
good lord, we need t-shirts with that
Jim Bosch (he/him) Thu 2020-08-13 12:53PM
I want to see the graphical design contest for this.
Merlin Thu 2020-08-13 12:53PM
Chuck's comment, or Adam's?
Jim Bosch (he/him) Thu 2020-08-13 12:54PM
I assumed Chuck's.
Merlin Thu 2020-08-13 12:54PM
(I was joking)
Merlin Thu 2020-08-13 12:55PM
Pretty sure Frossie doesn't want more Adam stuff, and to be wearing it on her body!
Dave Morris Thu 2020-08-13 12:53PM
Thank you for saying this.
mrawls Thu 2020-08-13 12:53PM
:clap:
Robert Blum Thu 2020-08-13 12:53PM
:clap:
Eli Rykoff Thu 2020-08-13 12:53PM
:clap:
brant Thu 2020-08-13 12:53PM
Well said, @iamstarnord (he/him) !
Hiranya Peiris Thu 2020-08-13 12:53PM
Wonderful talk!
Luis Pineda Thu 2020-08-13 12:53PM
thanks
ajc Thu 2020-08-13 12:53PM
:clap:
Emily (she/her) Thu 2020-08-13 12:53PM
:clap:
A Emery Watkins Thu 2020-08-13 12:53PM
:clap:
bmiller Thu 2020-08-13 12:54PM
:clap:
Tom Glanzman Thu 2020-08-13 12:54PM
:clap:
Francois Lanusse Thu 2020-08-13 12:54PM
:clap:
Andrés Plazas (he/him/his) Thu 2020-08-13 12:54PM
:clap:
Paul Price Thu 2020-08-13 12:54PM
Is this where we discover that this entire talk was a deep fake?
Robert Nikutta Thu 2020-08-13 01:09PM
That would have been a real coup!
iamstarnord (he/him) Thu 2020-08-13 02:01PM
the ultimate rick-roll
Alex Gagliano [he/him] Thu 2020-08-13 12:54PM
:clap:
Rstreet Thu 2020-08-13 12:54PM
:clap:
Frank Kenney Thu 2020-08-13 12:54PM
:clap:
Orlando Mendez Thu 2020-08-13 12:54PM
:clap:
Peter Melchior Thu 2020-08-13 12:54PM
:clap:
Yao-Yuan Mao Thu 2020-08-13 12:54PM
:clap:
Adam Thornton Thu 2020-08-13 12:54PM
:clap:
Michael Reuter Thu 2020-08-13 12:54PM
:clap:
leanne Thu 2020-08-13 12:54PM
:clap:
Robert Nikutta Thu 2020-08-13 12:54PM
Thank you!
Nushkia Chamba Thu 2020-08-13 12:54PM
:clap:
Tiago Ribeiro Thu 2020-08-13 12:54PM
:clap:
n8pease Thu 2020-08-13 12:54PM
:clap:
Suzanne Thu 2020-08-13 12:54PM
:clap:
Francisco Forster Thu 2020-08-13 12:54PM
:clap:
Alex Drlica-Wagner (he/him) Thu 2020-08-13 12:54PM
:clap:
Ken Smith Thu 2020-08-13 12:54PM
:clap:
sthomas Thu 2020-08-13 12:54PM
:clap:
Imran Hasan Thu 2020-08-13 12:54PM
:clap:
Renee Hlozek Thu 2020-08-13 12:54PM
excellent talk @iamstarnord (he/him)
T Sloan Thu 2020-08-13 12:54PM
:clap:
Javier Sanchez Thu 2020-08-13 12:54PM
:clap:
Hiranya Peiris Thu 2020-08-13 12:54PM
:clap:
Dave Young (QUB, UK) Thu 2020-08-13 12:54PM
:clap:
Benne Holwerda Thu 2020-08-13 12:54PM
:+1:
Brandon Kelly Thu 2020-08-13 12:54PM
:clap:
Jgates Thu 2020-08-13 12:54PM
:clap:
Lorena Hernandez Thu 2020-08-13 12:54PM
Thank you! Very nice talk!
Dave Morris Thu 2020-08-13 12:54PM
:clap:
Somayeh Khakpash Thu 2020-08-13 12:54PM
:clap:
Dara Norman Thu 2020-08-13 12:54PM
:clap:
Sara (Rosaria) Bonito Thu 2020-08-13 12:54PM
:clap:
Aleksandra Ciprijanovic Thu 2020-08-13 12:54PM
:clap:
Chien-Hsiu Lee Thu 2020-08-13 12:54PM
:clap:
Steve Ritz Thu 2020-08-13 12:54PM
:clap:
Antonio Vazquez Thu 2020-08-13 12:54PM
:clap:
Kathy Vivas Thu 2020-08-13 12:54PM
:clap:
Mark Newhouse Thu 2020-08-13 12:54PM
:clap:
Gonzalez Alma Thu 2020-08-13 12:54PM
:clap:
Dragana Ilic Thu 2020-08-13 12:54PM
:clap:
andyxl Thu 2020-08-13 12:54PM
Q: most of the world is concerned with decision and action, whereas science is about (communicable) understanding. So will there ever be money to support making interpretable AI??
Alison Rose Thu 2020-08-13 12:56PM
Science is about understanding the nature of things. I do wonder if what 'most of the world' is concerned about would be different if we invested more in education; if we prioritized every child's education.
andyxl Thu 2020-08-13 12:57PM
The thing is that decision is what AI is good at.
Chris Lintott Thu 2020-08-13 12:59PM
I think there's a strong need for it in the medical sphere, if aspirations to use AI for diagnosis etc are to be more than just aspirations.
Dave Morris Thu 2020-08-13 01:02PM
There is a strong incentive to make commercial profit in the medical sphere.
Roy Williams Thu 2020-08-13 01:02PM
Neural networks are famously incomprehensible - a computer can come up with a good answer, but not be able to explain what led to the conclusion. Been Kim is developing a "translator for humans" so that we can understand when artificial intelligence breaks down. https://www.quantamagazine.org/been-kim-is-building-a-translator-for-artificial-intelligence-20190110/
andyxl Thu 2020-08-13 01:13PM
@Chris Lintott ... if medical AI is just for taking the medical decisions, then it is no different. But if the doctor uses it as a tool, then I guess this is like physics. The doctor has to reserve their right to make the final decision; when the AI system has done its thing, the doctor says to it "well, that's what you think, but why? Convince me". So it's about machine-human communication.
Chris Lintott Thu 2020-08-13 01:14PM
It's partly that, but I've also been to talks where people working in the field have argued that they need comprehensible AI to be lawsuit proof.....
andyxl Thu 2020-08-13 01:15PM
Interesting. Basically, AI has to learn to speak human
Chris Lintott Thu 2020-08-13 01:17PM
Yeah - a good example is the awful case where a self-driving car hit a pedestrian. The explanations given were things like 'they stepped out suddenly' - which is a human explanation - but they couldn't be rooted in data.
Dave Morris Thu 2020-08-13 01:18PM
What if the doctor and the AI are two separate legal entities?
Dave Morris Thu 2020-08-13 01:18PM
The NHS has contracted out retinal scanning for diabetic patients to a commercial company.
Dave Morris Thu 2020-08-13 01:18PM
http://www.devondesp.co.uk/
andyxl Thu 2020-08-13 01:19PM
Hmmm. And scientists are a special subset. We have standards of rigour, repeatability, objectivity, but mostly it's about being able to communicate with and convince sceptical peers
Dave Morris Thu 2020-08-13 01:19PM
" The Devon Diabetic Eye Screening Programme is provided by Health Intelligence Ltd"
Dave Morris Thu 2020-08-13 01:20PM
The commercial company gets paid for providing 'advice' to doctors.
Dave Morris Thu 2020-08-13 01:20PM
So they get access to all the data to train their AI models, become the world authority on retinal scans ... but have none of the liability.
Hiranya Peiris Thu 2020-08-13 01:21PM
There is increasing work on interpretable and explainable deep learning and the related topic of "knowledge extraction" in the machine learning field. The right to an explanation is a requirement in EU Law so the use of AI in medicine and law will need these developments. I have been trying out some related ideas in relation to understanding structure formation and think there are promising directions there.
andyxl Thu 2020-08-13 01:26PM
Sounds very promising Hiranya.
Hiranya Peiris Thu 2020-08-13 01:27PM
Here is a paper with some toy examples: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.124.010508
Hiranya Peiris Thu 2020-08-13 01:27PM
(it is much harder for real examples, we find :wink: )
Dave Morris Thu 2020-08-13 01:28PM
If a human doctor makes a diagnosis, how would they justify their decision ?
"I've looked at the data, and based on my experience I think this [data point] and this [data point] suggest [diagnosis]"
Dave Morris Thu 2020-08-13 01:28PM
Is it different if the AI highlighted the data points for them ?
andyxl Thu 2020-08-13 01:29PM
I like the term "machine assisted scientific discovery. This is precisley it, as opposed to AI doing the scientific discovery.
andyxl Thu 2020-08-13 01:29PM
i.e. in that abstract you just linked Hiranya
Hiranya Peiris Thu 2020-08-13 01:30PM
Here is another paper that I think is very interesting (though I don't agree with everything in it): https://arxiv.org/pdf/1806.00069.pdf
andyxl Thu 2020-08-13 01:31PM
@Dave Morris There are no hard and fast rules. A doctor has to be able at least in principle to convince their peers, just like scientists. Science is really all about community :slightly_smiling_face:
andyxl Thu 2020-08-13 01:31PM
Oh better get to the brokers session!
William O'Mullane Thu 2020-08-13 12:54PM
:clap:
Matthew (he/his) Thu 2020-08-13 12:54PM
:clap:
Lucianne Walkowicz Thu 2020-08-13 12:54PM
:clap: :clap: :clap:
Emmanuel Gangler Thu 2020-08-13 12:54PM
:clap:
Jeno Sokoloski Thu 2020-08-13 12:54PM
:clap:
Isidora Jankov Thu 2020-08-13 12:54PM
:clap:
Huan Lin Thu 2020-08-13 12:55PM
:clap:
Samuel Schmidt Thu 2020-08-13 12:55PM
:clap:
Alison Rose Thu 2020-08-13 12:55PM
What decisions would Google / SpaceX / facebook make if they operated like science collaborations - like CERN, like DESC - with the codes of conduct and the concern for DEI? What would they be developing? Would the focus be on resources directed to "manned space flight" to Mars? :clap:
Merlin Thu 2020-08-13 12:56PM
They'd call it "crewed space flight" for a start :slightly_smiling_face:
Steve Ritz Thu 2020-08-13 12:56PM
And why don't we insist Deans of business schools include such things in the regular curriculum?
Merlin Thu 2020-08-13 12:57PM
Good question though!
Merlin Thu 2020-08-13 12:57PM
Likely it's too paradoxical to answer though - if they did, they wouldn't exist in anything recognisable as their current form, thus undermining the original question.
Merlin Thu 2020-08-13 12:58PM
Paging @Adam Thornton on whether you could have a company like these, operating like that.
mrawls Thu 2020-08-13 12:58PM
they'd spend a lot more of their time writing grants to try to get funds...
Lucianne Walkowicz Thu 2020-08-13 12:58PM
It's my impression that Google/SpaceX/FB think they are operating that way-- just as most academic departments also think they care about DEI-- and yet, all of the above continue to hire white people almost exclusively
Adam Thornton Thu 2020-08-13 12:58PM
I've been sitting on my hands here because I'm pretty sure I can't speak my thoughts without committing multiple CoC violations.
Adam Thornton Thu 2020-08-13 01:00PM
Anything I'd say spirals into "first, we need to stop pretending it's anything other than white supremacy that makes up the foundation of American capitalism and society" by like the third sentence and gets angrier from there.
Lucianne Walkowicz Thu 2020-08-13 01:01PM
^ THIS A MILLION TIMES
Alison Rose Thu 2020-08-13 01:09PM
What if Rubin moved off google docs, and youtube? videos could go on Vimeo, and be shared via the web site. What are the alternatives to Google Docs... and gmail? I'm trying to leave Google and I feel like Google's the kind of partner that doesn't let you leave; that I don't have autonomy to make that choice, and that is I think the root of the problem.
Hsin-Fang Chiang Thu 2020-08-13 01:10PM
Re: including such things in the regular curriculum, it reminded me of this news: https://www2.calstate.edu/csu-system/news/Pages/CSU-Trustees-Approve-Ethnic-Studies-and-Social-Justice-General-Education-Requirement.aspx
Adam Thornton Thu 2020-08-13 01:11PM
Apple is better than Google is better than Facebook, but that's a ranking like Shingles are better than COVID-19 is better than Ebola, so.....
Merlin Thu 2020-08-13 01:11PM
Google Docs are not an official part of Rubin workflow, at least in DM, and to my knowledge (not that they're entirely unused). For that kind of thing Confluence (and Docushare) is our go-to (not that I have any love for Confluence myself)
Merlin Thu 2020-08-13 01:11PM
Yeah, I've had shingles, and it was pretty whack tbh.
Adam Thornton Thu 2020-08-13 01:12PM
I mean, for the Rubin Observatory stuff--it IS a big public project, and, like, none of it should be secret, so if Google wants to index it and then try to sell us things based on our, uh, astronomical specialties??, I'm not too worried by it.
Adam Thornton Thu 2020-08-13 01:14PM
There's a much stronger argument against using ... well, cloud providers, by which we really just mean "other people's computing infrastructure," for things you have a reason to keep from being public. Like, personal Gmail was a terrible idea but I'm in way too deep for it to be worth my while to reverse it.
Adam Thornton Thu 2020-08-13 01:15PM
(but since I'm unlikely to be ordering assassinations or buying 50 kilos of heroin over e-mail, to what degree do I really care? Yeah, you can mine it and find out I'm a late-40s single nerd who cares distressingly much about ancient computers, and I like dogs)
Lucianne Walkowicz Thu 2020-08-13 01:16PM
@Alison Rose to your question (and of interest to others here): https://switching.software/
Lucianne Walkowicz Thu 2020-08-13 01:17PM
but part of the problem is that only tiny pieces of the solutions lie in individual choices
Lucianne Walkowicz Thu 2020-08-13 01:17PM
Like, if you don't want to use Amazon Web Services, good luck using literally any aspect of the internet
Alison Rose Thu 2020-08-13 01:18PM
Yes but, Adam, it's like China: the Canadian government said that if we trade with China, we will have leverage to persuade China to become more democratic. Now all our supply chains go through China, Canada invoked an extradition request from the US, and while the Chinese exec waits out her extradition in her mansion in Vancouver, China is torturing two Anglo-Saxon Canadians slowly to death, and sentencing to death Chinese Canadians it has arrested in China (on drug charges). We have no leverage (until we redevelop other supply chains). And instead China wants us to become more like it: less like an SC; more like a totalitarian regime. To what cosmic end, I cannot seem to imagine. Similarly, gmail and google docs are how users are groomed by Google to be exploited, and that is I think the root of the problem.
Adam Thornton Thu 2020-08-13 01:21PM
Yeah, but what you're indicting here is terminal capitalism. It's neoliberal globalization. I do not see any peaceful solutions to the coming climate and economic apocalypses.
Russ Allbery Thu 2020-08-13 01:22PM
Right, but if you don't use those services, does it really make a difference? I'm moderately skeptical that we can solve large-scale social, governmental, and regulatory problems via individual choice. I think this too often ends up going in the direction of "just don't buy meat from the butchers with unsafe practices." (a) You don't know which have unsafe practices unless you have a strong regulatory state, (b) they have many ways of lying to you (there is a substantial information inequality), and (c) it's incredibly hard to maintain a sufficient level of regulation via atomized individual effort against personal interest.
Lucianne Walkowicz Thu 2020-08-13 01:23PM
^ yes, this was my point above
Alison Rose Thu 2020-08-13 01:23PM
Yes, I agree: what can we do then? I have an idea. What are your ideas?
Russ Allbery Thu 2020-08-13 01:23PM
Yeah, you made it much more succinctly, too, sorry!
Lucianne Walkowicz Thu 2020-08-13 01:24PM
No worries!
Alison Rose Thu 2020-08-13 01:24PM
What a great way to spend the break.
Adam Thornton Thu 2020-08-13 01:24PM
My ideas involve a lot of things that, if I were to mention them, would most definitely be against any sane Code of Conduct.
Russ Allbery Thu 2020-08-13 01:25PM
I think it's a political problem and it has to have a political solution. I was just listening to a podcast today with the author of The Price of Peace, a biography of John Maynard Keynes, and one of the points he was making is that Keynes was willing to ask questions like "what is an economy for". I think we've lost track of that; we're too focused on jobs and profits and consumption and making things and aren't asking larger questions philosophically and politically and socially.
Adam Thornton Thu 2020-08-13 01:26PM
Let me just say (as a completely pointless and irrelevant digression with nothing to do at all with anything here) I have this hanging in my kitchen.

https://jacobinmag.com/store/product/1
Lucianne Walkowicz Thu 2020-08-13 01:28PM
@Alison Rose I think the answer to "what are your ideas" is heavily dependent on what your intended course of action is and how you want to take action, because "dismantle white supremacist capitalism" is a very sweeping task. One top level suggestion I have tried making (e.g. here https://notnotrocketscience.substack.com/p/who-owns-the-greater-good) is that we physics/astronomy professionals tend to take action only in our personal spheres, so a major step for a lot of folks would be to go further than thinking about oppression within their projects, or even within the academy or STEM more broadly, when many of these issues most profoundly impact people who don't have access to the spaces we move in
Russ Allbery Thu 2020-08-13 01:31PM
My personal solutions are mostly in the direction of trying to bring the profit motive back into balance with other goals. Can we give people more leisure instead of more profit? We can start with more vacation. Can we give people a better personal life balance? We can start with better family leave policies, equal family leave for men and for women, stronger protection for career paths during family leave. Can we share the products of the economy more equally? Lower wages for high-level managers, higher wages for the least-well-paid people in an organization, less wage discrepancy between the top and bottom. Etc.
andyxl Thu 2020-08-13 12:55PM
:+1:
rojofija Thu 2020-08-13 12:56PM
Can AI be trained to avoid being hacked in automatic systems such as self-driving cars, drones, etc...?
Jgates Thu 2020-08-13 01:00PM
I think you can teach AI to recognize non-normal behavior and work with that.
Robert Nikutta Thu 2020-08-13 01:04PM
Agreed. But as for "unhackability"... Given a bigger/richer adversary than your org, it can't be 100% secured.
Jim Annis Thu 2020-08-13 12:58PM
Think about putting this talk up publicly- outside the Rubin community.
Federica Bianco (she/her/hers) Thu 2020-08-13 01:02PM
i think the talks are going on youtube, so it should be!
Federica Bianco (she/her/hers) Thu 2020-08-13 01:02PM
(e.g. the RRB)
Jim Annis Thu 2020-08-13 01:03PM
Fantastic. I'll bet Brian's talk will be of great interest.
mrawls Thu 2020-08-13 01:08PM
The YouTube links for all the talks are presently unlisted, we should def check with Brian before publicizing this since it was intended for us as an audience, but I wholeheartedly support sharing it far and wide if he is OK with it!
Jim Annis Thu 2020-08-13 01:09PM
Yes!
Jkantor Thu 2020-08-13 12:58PM
:clap:
Rbiswas4 Thu 2020-08-13 12:58PM
:clap:
Andjelka Thu 2020-08-13 01:01PM
:clap:
Benne Holwerda Thu 2020-08-13 01:01PM
Yesterday, I was told that the surveillance software for online testing flagged students of color more often for cheating... Showing how that happens is critical, I think. But there is a gap between an off feeling and actual substantiated proof.
mrawls Thu 2020-08-13 01:07PM
There's a line here between believing people's experiences ("I was flagged for cheating but I wasn't cheating") and requiring rigorous statistics before making a change ("until you prove there is a bias we will continue as before"). I would also strongly advocate against using such tools in the first place - see, e.g., https://spark.adobe.com/page/iufSUbZZFJm5H/
Luis Pineda Thu 2020-08-13 01:02PM
:clap:
Dmills Thu 2020-08-13 01:02PM
:clap:
Steve Ritz Thu 2020-08-13 01:02PM
:clap: :clap: :clap: Thanks, again, Brian!
Adam Thornton Thu 2020-08-13 01:02PM
:clap:
andyxl Thu 2020-08-13 01:02PM
Lovely, bye
Yuanyuan Zhang Thu 2020-08-13 01:02PM
:clap:
Clara E. Martínez-Vázquez Thu 2020-08-13 01:02PM
:clap:
Sara (Rosaria) Bonito Thu 2020-08-13 01:02PM
Thanks :clap:
T Sloan Thu 2020-08-13 01:02PM
:clap:
Benne Holwerda Thu 2020-08-13 01:02PM
:clap:
wvanreeven Thu 2020-08-13 01:02PM
:clap:
Rob Bovill Thu 2020-08-13 01:02PM
:clap:
Gareth Francis Thu 2020-08-13 01:02PM
:clap:
Chuck Claver Thu 2020-08-13 01:03PM
:clap:
Amanda Bauer Thu 2020-08-13 01:06PM
thanks :clap:
Allan Jackson6255 Thu 2020-08-13 01:12PM
:clap:
Tony Tyson Thu 2020-08-13 01:19PM
:clap:
Allan Jackson6255 Thu 2020-08-13 01:49PM
FYI, another reference for ethics in AI: Rachel Thomas (@math_rachel), Director of the USF Center for Applied Data Ethics (@DataInstituteSF) and co-founder of http://fast.ai | deep learning, ethics, math PhD | she/her
Tim Lister Thu 2020-08-13 01:59PM
Also Ruha Benjamin has some very interesting talks on race and technology leading to discrimination e.g. https://www.youtube.com/watch?v=JahO1-saibU
Allan Jackson6255 Thu 2020-08-13 02:03PM
Wow, very powerful - thank you for sharing this, Tim! :100:
Allan Jackson6255 Thu 2020-08-13 01:53PM
Also I mentioned to Melissa Graham that I would post these AI in astronomy recent references:
Caltech Astroinformatics 2019 YouTube playlist of 41 videos
https://www.youtube.com/playlist?list=PL8_xPU5epJdcv2L4MzpzNd6gPyq6glmjc

ESO 2019 conference program on AI in Astronomy
https://www.eso.org/sci/meetings/2019/AIA2019/program.html
My favorite (so far) by Giuseppe Longo:
https://www.eso.org/sci/meetings/2019/AIA2019/PDF/Invited/Longo.pdf

Full recordings of the Stanford AIMI Symposium on #AI in Medicine & Imaging are now available!
https://aimi.stanford.edu/news-events/aimi-symposium/agenda
iamstarnord (he/him) Thu 2020-08-13 03:00PM
https://ainowinstitute.org/
Orlando Mendez Thu 2020-08-13 03:16PM
Maybe in a bit broader scope, it is worth mentioning https://futureoflife.org/ too?
Ranpal (she/her/hers) Thu 2020-08-13 07:33PM
The recording of the live session is here: https://youtu.be/4FxyN8OD8hs