Discussion:
[Beowulf] More about those underwater data centers
Lux, Jim (337K) via Beowulf
2018-11-03 18:27:05 UTC
Permalink
https://arstechnica.com/gadgets/2018/11/satya-nadella-the-cloud-is-going-to-move-underwater/

I was amused by this:
He cites proximity to humans as a particular advantage: about 50 percent of the world's population lives within 120 miles of a coast. Putting servers in the ocean means that they can be near population centers, which in turn ensures lower latencies. Low latencies are particularly important for real-time services, including Microsoft's forthcoming https://arstechnica.com/gadgets/2018/10/microsoft-announces-project-xcloud-xbox-game-streaming-for-myriad-devices/.

I’m not sure there’s a huge population of Xcloud-Xbox gamers in Orkney. There's not much daylight this time of year, of course, so maybe that's what those Orcadians are up to.

And I believe that 100% of the UK's population lives within 120 miles of a coast. ("coast" gets around the often contentious discussion of where the "sea" starts in the face of tidal estuaries and tidal flats - I was struck by the sheer volume of discussion related to "what point in the UK is farthest from the sea")
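
As a rough sanity check on the latency argument: the one-way propagation delay over 120 miles of fiber is only about a millisecond. A sketch, assuming a typical ~2/3 c velocity factor for glass fiber (numbers assumed, not from the thread):

```python
# Back-of-envelope: propagation delay over 120 miles of fiber.
# Assumes light travels at ~2/3 c in fiber (refractive index ~1.5).
C_VACUUM_KM_S = 299_792       # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3          # typical velocity factor for glass fiber
MILES = 120
KM = MILES * 1.609344

one_way_ms = KM / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000
round_trip_ms = 2 * one_way_ms
print(f"one-way: {one_way_ms:.2f} ms, round trip: {round_trip_ms:.2f} ms")
```

So the fiber distance itself is a small contributor; most real-world latency comes from routing, queuing and the serving stack rather than the last 120 miles.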

--

_______________________________________________
Beowulf mailing list, ***@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
Stu Midgley
2018-11-04 13:40:12 UTC
Permalink
Putting data centres in the ocean is complete rubbish.

The most stupid and expensive exercise I've ever heard of.

Have they heard of a pump?

Heat rejection to sea water is a good idea (rather than air) and will be a whole heap more efficient... but you don't need to submerge your DC. They still need to get the heat off the CPUs to the sea... that's where all the real issues are.
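
To get a feel for the scale of that heat rejection, here is a back-of-envelope sketch of the seawater flow needed; the 1 MW load and 10 K allowable temperature rise are illustrative assumptions, not figures from the thread:

```python
# Rough sizing: seawater mass flow needed to reject a given heat load.
# Assumed: 1 MW IT load, 10 K coolant temperature rise,
# seawater cp ~= 3990 J/(kg*K), density ~= 1025 kg/m^3 (approximate values).
P_WATTS = 1_000_000      # heat load to reject, W
CP_SEAWATER = 3990.0     # specific heat of seawater, J/(kg*K)
DELTA_T = 10.0           # allowed coolant temperature rise, K
RHO_SEAWATER = 1025.0    # seawater density, kg/m^3

mass_flow = P_WATTS / (CP_SEAWATER * DELTA_T)      # kg/s
vol_flow_l_s = mass_flow / RHO_SEAWATER * 1000     # litres/s
print(f"{mass_flow:.1f} kg/s  (~{vol_flow_l_s:.1f} L/s)")
```

Tens of litres per second per megawatt is a modest flow, which is why the hard part is the CPU-to-water interface rather than the final heat sink.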



--
Dr Stuart Midgley
***@gmail.com
Gerald Henriksen
2018-11-04 14:10:23 UTC
Permalink
I’m not sure there’s a huge population of Xcloud-Xbox gamers in Orkney. There's not much daylight this time of year, of course, so maybe that's what those Orcadians are up to.
Likely just a convenient place for a second test unit.

In a way this is just an extension of the idea/product Sun came up with, where they put a datacentre in a shipping container with the idea that you could quickly get the datacentre where it was needed.

While I wouldn't say this won't fail, I think there is a lot of attraction to the concept given not just the time lag to build a traditional data centre (mentioned in the article), but also the cost of real estate in many/most places people live these days. Do you, for one example, want to pay NYC rents, or just throw a bunch of pods in the Hudson?

I guess once you accept the idea that we no longer maintain these datacentres in the traditional way - we now just let hardware fail in place and ignore it until it's time to replace all the hardware - moving to smaller sealed units doesn't seem too strange.
John Hearns via Beowulf
2018-11-04 16:13:29 UTC
Permalink
Gerald refers to the web-scale datacentres, where the door is shut and servers just fail until this exceeds a certain threshold.
I would move this discussion on - the initial guarantee for HPC servers is three years, with many customers in the UK asking for a five-year support plan. After this the servers are disposed of...
Have we faced up to the environmental impact of this?
It is often said that CPUs can be upgraded - I have only once seen an in-place upgrade in my career. By the time a couple of years have elapsed you are better off going for the next generation of servers.
I don't have a magic wand to wave to solve this problem, but it is something we should be thinking about.
Chris Samuel
2018-11-04 22:06:54 UTC
Permalink
Post by John Hearns via Beowulf
Have we faced up to the environmental impact of this?
Everywhere I've been has always tried to reuse/resell/recycle systems. Our Alpha cluster was snapped up by folks in the US, our first Intel cluster went to another university, and our Power5 cluster went somewhere I can't remember. At ${JOB-1} we had Intel clusters redirected to other parts of the university (including one to the LHC ATLAS folks there). BlueGene - well, not so much, as far as I could tell. :-(
Post by John Hearns via Beowulf
It is often said that CPUs can be upgraded - I have only once seen an
upgrade in place in my career.
Only been offered (and did) this once, about a decade ago at ${JOB-2}, where we upgraded a system from dual-core to quad-core Opteron (Barcelona).

That could have gone.... better... First of all it was all delayed because of
the TLB errata (we eventually got affected chips and ran with the kernel patch
before getting the rev'd silicon) and then we started to see random lock ups.

Turned out (after a lot of chasing) that whilst the mainboard was meant to be OK, the layout in the box meant the RAM next to the CPUs would sometimes overheat and take down the box. They added some heatsinks to those DIMMs and the problem went away!

All the best,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC


j***@eagleeyet.net
2018-11-05 06:27:06 UTC
Permalink
Probably a stupid question here,

What is the advantage of using salty sea water, let's say, over for example mineral oil? I have seen these guys on YouTube showing that a PC will still run in a fish tank with all components submerged in mineral oil. Yes, it will be messier to change components, but would the use of mineral oil be more efficient?
Tony Brian Albers
2018-11-05 08:16:35 UTC
Permalink
Salt water is highly corrosive; that's why people use mineral or silicone oil.

I've heard of people trying to use power-transformer oil (the kind used in electrical-grid substations), but I don't know if it worked.

/tony
--
Tony Albers
Systems Architect
Systems Director, National Cultural Heritage Cluster
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.
Tel: +45 2566 2383 / +45 8946 2316
Benson Muite
2018-11-05 08:30:40 UTC
Permalink
Some power-transformer oils do work - but performance measurements, performance/price evaluations, and long-term system durability studies seem to be lacking.
Lux, Jim (337K) via Beowulf
2018-11-05 18:29:00 UTC
Permalink
It works, as do cooling liquids like Fluorinert - oil is a few dollars a gallon, Fluorinert is a few hundred dollars a gallon. Fluorinert is "cleaner", and you can do some interesting things with "ebullient" cooling (i.e. boiling).

Fluorinert is also, as the name implies, much more inert than oil. Oil is a fairly good solvent for some things, so you have to pay attention to what's on your boards. Nothing much dissolves in Fluorinert, other than gases (you've all seen the mouse breathing "underwater").
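
For a rough sense of how these coolants compare as heat carriers, here is a sketch using approximate handbook property values for water, mineral oil, and FC-72 (one of the Fluorinert liquids); none of these numbers come from the thread:

```python
# Volumetric heat capacity: how much heat a litre of coolant carries per
# kelvin of temperature rise.  Property values are approximate handbook
# numbers and vary by product and temperature.
coolants = {
    #                        density kg/m^3, cp J/(kg*K)
    "water":                 (1000.0, 4186.0),
    "mineral oil":           ( 850.0, 1900.0),
    "FC-72 (a Fluorinert)":  (1680.0, 1100.0),
}
vol_heat_capacity = {
    name: rho * cp / 1e6   # MJ/(m^3*K)
    for name, (rho, cp) in coolants.items()
}
for name, c in vol_heat_capacity.items():
    print(f"{name:22s} {c:.2f} MJ/(m^3*K)")
```

Per litre, both oil and Fluorinert carry well under half what water does, which is part of why Fluorinert systems often lean on boiling (ebullient cooling) to exploit the latent heat instead.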


Jim Lux
(818)354-2075 (office)
(818)395-2714 (cell)


John Hearns via Beowulf
2018-11-05 09:26:30 UTC
Permalink
Jonathan, Stu Midgley will be along in a minute. He can tell us all about
immersion cooling!
A friend of mine operates another type of cluster at a UK lab, which is
immersion cooled.
He calls it the Deep Fat Fryer. I shall refrain from naming the lab!
Jonathan, the data centre inside the submerged cylinder has air in it. The sea water is let in through pipes, and a heat exchanger is used to water-cool the racks.
I would imagine that the raw seawater is used to conduct heat out of the
cylinder, and pure water is recirculated through the racks.
The servers are not immersed in sea water!
Stu Midgley
2018-11-05 11:05:02 UTC
Permalink
Immersion cooling just gets the heat off the CPUs more efficiently. You can hook it up to any heat-rejection system you want (salt water would be fine with the correct heat exchanger and components).

You "could" fill the whole facility with PAO and circulate it around? I don't know... it doesn't make sense at all.
Post by John Hearns via Beowulf
Jonathan, Stu Midgley will be along in a minute. He can tell us all about
immersion cooling!
A friend of mine operates another type of cluster at a UK lab, which is
immersion cooled.
He calls it the Deep Fat Fryer. I shall refrain from naming the lab!
--
Dr Stuart Midgley
***@gmail.com
Stu Midgley
2018-11-05 11:02:46 UTC
Permalink
As far as I can tell, they are just using the salt water to reject the heat to. How they get the heat from the CPUs/hot bits to the water is not clearly stated...

A passive heat exchanger would make energy sense... but would cost a bomb in engineering... maybe direct fluid cooling (Asetek) with a heat exchanger to the salt water?

Either way, it's stupid. They could just as easily pump the cool salt water from the ocean into a DC, reject heat to it using the same methods... and pump it back to the ocean. Since there's no real delta in height, it would be energy-efficient.

OR... just use a boat...
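
The point about the missing height delta can be sketched numerically: with no net lift, pump power is just pressure drop times flow over efficiency. The load, flow, pressure drop, and pump efficiency below are all assumed illustrative values, not figures from the thread:

```python
# Sketch: with no net height difference, pumping seawater through a
# shore-side DC costs little relative to the heat rejected.
# Assumed: 1 MW IT load served by ~25 L/s of seawater, ~200 kPa total
# friction loss in piping + heat exchanger, 70% pump efficiency.
IT_LOAD_W = 1_000_000       # IT heat load, W
FLOW_M3_S = 0.025           # seawater flow, m^3/s (~25 L/s)
PRESSURE_DROP_PA = 200_000  # total friction loss, Pa
PUMP_EFFICIENCY = 0.70      # overall pump + motor efficiency

pump_power_w = PRESSURE_DROP_PA * FLOW_M3_S / PUMP_EFFICIENCY
overhead_pct = 100 * pump_power_w / IT_LOAD_W
print(f"pump: {pump_power_w:.0f} W  ({overhead_pct:.2f}% of IT load)")
```

Under these assumptions the pumping overhead is well under one percent of the IT load, so the energy argument for a shore-side DC with a seawater loop holds up.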
--
Dr Stuart Midgley
***@gmail.com
Prentice Bisbal via Beowulf
2018-11-05 16:00:19 UTC
Permalink
Post by Stu Midgley
As far as I can tell, they are just using the salt water to reject the
heat to.  How they get the heat from the cpu/hot bits to the water is
not clearly stated...
A passive heat exchanger would make energy sense... but would cost a
bomb in engineering...  maybe direct fluid cooling (asetek) with a
heat-exchanger to the salt water?
Either way, its stupid.  They could just easily pump the cool salt
water from the ocean into a DC, reject heat to it using the same
methods... and pump it back to the ocean. Since no real delta in
height, it would be efficient in energy.
The issue with this would be the increased maintenance cost of the equipment pumping the salt water to the DC, due to the corrosion from the salt water and the overall 'dirtiness' of the seawater. A better approach would be to have a closed loop of treated freshwater going from the data center to a heat exchanger submerged in the sea. This should reduce maintenance costs for the system.

Honestly, though, I think most of this is moot. With direct-contact
liquid cooling and warm-water cooling, I think for most data centers,
cooling to ambient air should be adequate. For places where that isn't
enough, I would think a shallow, man-made cooling pond on premises would
be an adequate heat sink, without having to go all the way to the ocean.
By keeping it shallow, at night when it cools off, the pond could dump a
lot of its heat to the atmosphere.
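
A rough sizing sketch for such a pond, assuming a combined evaporative/convective/radiative surface loss of ~120 W per square metre - a commonly quoted ballpark for cooling ponds; the real figure varies a lot with climate, wind, and pond temperature:

```python
# Rough cooling-pond sizing: surface area needed to continuously reject
# a given heat load.  The surface-loss figure is an assumed ballpark.
IT_LOAD_W = 1_000_000        # data-center heat load, W
SURFACE_LOSS_W_M2 = 120.0    # combined evaporative+convective+radiative loss

area_m2 = IT_LOAD_W / SURFACE_LOSS_W_M2
print(f"~{area_m2:.0f} m^2 (~{area_m2 / 10_000:.2f} hectares) per MW")
```

Under a hectare per megawatt is plausible for a suburban site, which supports the idea that an on-premises pond can substitute for an ocean heat sink in many climates.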
John Hearns via Beowulf
2018-11-05 18:35:03 UTC
Permalink
Post by Prentice Bisbal via Beowulf
Honestly, though, I think most of this is moot. With direct-contact liquid cooling and warm-water cooling, I think for most data centers, cooling to ambient air should be adequate. For places where that isn't enough, I would think a shallow, man-made cooling pond on premises would be an adequate heat sink, without having to go all the way to the ocean. By keeping it shallow, at night when it cools off, the pond could dump a lot of its heat to the atmosphere.

Something like this perhaps?

http://youtu.be/0gCXfWCLZAA
On Mon, 5 Nov 2018 at 16:01, Prentice Bisbal via Beowulf <
Post by Prentice Bisbal via Beowulf
Prentice
As far as I can tell, they are just using the salt water to reject the
heat to. How they get the heat from the cpu/hot bits to the water is not
clearly stated...
A passive heat exchanger would make energy sense... but would cost a bomb
in engineering... maybe direct fluid cooling (asetek) with a
heat-exchanger to the salt water?
Either way, its stupid. They could just easily pump the cool salt water
from the ocean into a DC, reject heat to it using the same methods... and
pump it back to the ocean. Since no real delta in height, it would be
efficient in energy.
The issue with this would be the increased maintenance cost of the
equipment pumping the salt water to the the DC, do to the corrosion from
the salt water, and overall 'dirtiness' of the saltwater. A better approach
would be to have a closed loop of treated freshwater going from the data
center to the a heat exchanger submerged in the sea. This should reduce
maintenance costs for the system.
Honestly, though, I think most of this is moot. With direct-contact liquid
cooling and warm-water cooling, I think for most data centers, cooling to
ambient air should be adequate. For places where that isn't enough, I would
think a shallow, man-made cooling pond on premises would be an adequate
heat sink, without having to go all the way to the ocean. By keeping it
shallow, at night when it cools off, the pond could dump a lot of its heat
to the atmosphere.
OR... just use a boat...
Post by j***@eagleeyet.net
Probably a stupid question here,
What is the advantage of using salty sea water lets say over for example
mineral oil? I have seen on you tube these guys showing that a pc will
still run in a fish tank and all components submerged in mineral oil?
Yes it will be messier to change components but would the use of mineral
oil be more efficient?
Post by Gerald Henriksen
Post by Lux, Jim (337K) via Beowulf
I’m not sure there’s a huge population of Xcloud-Xbox gamers in
Orkney. There's not much daylight this time of year, of course, so
maybe that's what those Orcadians are up to.
Likely just a convenient place for a second test unit.
In a way this is just an extension of the idea/product Sun came up with
where they put a datacentre in a shipping container, with the idea that
you could quickly get the datacentre where it was needed.
While I wouldn't say this won't fail, I think there is a lot of
attraction to the concept given not just the time lag to build a
traditional data centre (mentioned in the article), but even the cost
of real estate in many/most places people live these days. Do you,
for one example, want to pay NYC rents or just throw a bunch of pods
in the Hudson?
I guess once you accept the idea that we no longer maintain these
datacentres in the traditional way - we now just let hardware fail in
place and ignore it until it's time to replace all the hardware -
moving to smaller sealed units doesn't seem too strange.
_______________________________________________
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
Prentice Bisbal via Beowulf
2018-11-05 20:45:44 UTC
Permalink
Yes. Something exactly like that! Is that what that pond is used for? I
would expect that it's much larger than what is needed for a typical data
center.

Prentice
Something like this perhaps?
http://youtu.be/0gCXfWCLZAA
C Bergström
2018-11-05 20:50:13 UTC
Permalink
Building cooling, maybe. Then again, in the UK I doubt the need would be
so strong. The building from aerial view is yin/yang, so it's probably just
design.

C Bergström
2018-11-05 20:51:04 UTC
Permalink
https://en.wikipedia.org/wiki/McLaren_Technology_Centre
Prentice Bisbal via Beowulf
2018-11-05 20:55:23 UTC
Permalink
The building is accompanied by a series of artificial lakes: one
formal lake directly opposite that completes the circle of the
building, and a further four 'ecology' lakes. Together they contain
about 50,000 m³ of water. This water is pumped through a series of
heat exchangers <https://en.wikipedia.org/wiki/Heat_exchanger> to cool
the building and to dissipate the heat produced by the wind tunnels.
So this water could definitely be used to cool the data center. I wonder
what that extra heat in the water does to the 'ecology' in those
'ecology lakes'.

Prentice
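As a sanity check on the ecology question (my own arithmetic, assuming a hypothetical 1 MW heat load and ignoring losses to the atmosphere), the quoted 50,000 m³ of lake water warms by less than half a degree per day at that load:

```python
# How fast would a steady heat load warm the ~50,000 m^3 of lake water?
# The 1 MW figure is an assumption for illustration; heat lost to the
# atmosphere is ignored, so this overestimates the actual rise.

WATER_HEAT_CAPACITY_J_PER_M3_K = 4.186e6

def daily_temp_rise_k(load_watts, volume_m3):
    """Temperature rise per day if all the heat stays in the water."""
    joules_per_day = load_watts * 86_400  # seconds in a day
    return joules_per_day / (volume_m3 * WATER_HEAT_CAPACITY_J_PER_M3_K)

rise = daily_temp_rise_k(1e6, 50_000)
print(f"{rise:.2f} K/day")  # ~0.41 K/day
```

In practice the lakes shed heat to the air continuously, so the ecological impact would likely be a slightly elevated equilibrium temperature rather than a runaway rise.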
j***@eagleeyet.net
2018-11-05 20:59:47 UTC
Permalink
Forget lakes - why not build a moat around the data center and use that
water to cool it? Then no lake ecologies are touched or harmed in the
cooling of these buildings.
Lux, Jim (337K) via Beowulf
2018-11-05 18:34:37 UTC
Permalink
The environment some 10s of meters submerged is significantly more benign than the coast – no waves, howling winds, crashing surf, etc. Once you solved the packaging problem (once), you’ve got a nice module you can replicate and deploy as needed. On land, you need to pay rent, deal with visual obstructions, etc.

Here in California, it would probably be significantly easier to sink something a km offshore than to put it on shore. The “onshore” facilities would just be a fairly innocuous cabling head end. To “build” anything close to the shore (within sight) would require substantial planning permission, regulatory compliance, etc.

I’m not sure what sort of permissions you’d actually need for the submerged data center. Probably something from the Fisheries folks to ensure you’re not disturbing the wildlife. You’d have to deal with an environmental impact report for the shore facility, but that might be straightforward.



Jim Lux
(818)354-2075 (office)
(818)395-2714 (cell)

From: Beowulf [mailto:beowulf-***@beowulf.org] On Behalf Of Stu Midgley
Sent: Monday, November 05, 2018 3:03 AM
To: Jonathan Aquilina <***@eagleeyet.net>
Cc: Beowulf List <***@beowulf.org>
Subject: Re: [Beowulf] More about those underwater data centers

As far as I can tell, they are just using the salt water to reject the heat to. How they get the heat from the cpu/hot bits to the water is not clearly stated...

A passive heat exchanger would make energy sense... but would cost a bomb in engineering... maybe direct fluid cooling (asetek) with a heat-exchanger to the salt water?

Either way, it's stupid. They could just as easily pump the cool salt water from the ocean into a DC, reject heat to it using the same methods... and pump it back to the ocean. Since there's no real delta in height, it would be energy-efficient.

OR... just use a boat...
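Stu's claim that a shore-side seawater loop costs little pumping energy can be sketched numerically (my own illustrative assumptions: a 1 MW load, 5 K coolant temperature rise, 10 m of friction head, 70% pump efficiency; with intake and outfall both at sea level, the static head cancels and only friction remains):

```python
# Pumping power for a once-through seawater cooling loop.
# Static head cancels (intake and return both at sea level), so only
# friction head matters. All parameter values are illustrative guesses.

SEAWATER_CP_J_PER_KG_K = 3_993  # approximate specific heat of seawater
G = 9.81                        # gravitational acceleration, m/s^2

def pump_power_w(heat_load_w, delta_t_k, friction_head_m, pump_eff):
    """Electrical power to move enough seawater to absorb heat_load_w
    with a delta_t_k coolant temperature rise."""
    mass_flow_kg_s = heat_load_w / (SEAWATER_CP_J_PER_KG_K * delta_t_k)
    return mass_flow_kg_s * G * friction_head_m / pump_eff

p = pump_power_w(1e6, 5.0, 10.0, 0.7)
print(f"{p/1e3:.1f} kW")  # ~7 kW, well under 1% of the 1 MW IT load
```

Under these assumptions the pumping overhead is a rounding error next to the IT load, which is the core of the "just pump it ashore" argument.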
Jonathan Aquilina
2018-11-05 19:32:43 UTC
Permalink
What about running cabling to the shore as well?

Sent from my iPhone
Lux, Jim (337K) via Beowulf
2018-11-05 18:25:22 UTC
Permalink
These things don't actually immerse the computers in sea water... they use the surrounding water as an "infinite cold sink" to dissipate the heat generated by the computers, which operate in air.

You really do NOT want to run boards immersed in coolant - yeah, there's folks doing it at HPC scale, but they're typically doing something in a volume-constrained environment (trying to get 100s of servers in a single rack) or some other special environment: high-altitude aircraft, where the air is thin; or a very dusty/dirty environment, where you need a "sealed box". I designed a small cluster meant to operate in a dusty environment that was basically a "spray cooling" system in an aluminum case. There was a small pump that picked up the inert fluid and sprayed it everywhere inside the box, including the boards and box sides, providing a simple(ish) thermal transfer between board and external case. It wasn't great, but at least it cooled the whole boards. The typical "liquid CPU cooler" only cools that one chip and depends on airflow for the rest, and you can't move enough heat from board to box shell with air. Maybe helium or hydrogen would work <grin>

Whatever the coolant, it leaks, it oozes, it gets places you don't want it to go. And serviceability is challenging. You need to pull the "wet" boards out, or you need to connect and disconnect fluid connectors, etc. If you're in an environment where you can manage that (or are forced into it by necessity), then you can do it.

A good "intermediate" scheme is liquid/air heat exchangers in close proximity to the electronics, and I believe that's what the "subsea datacenter" scheme uses.



Jim Lux
(818)354-2075 (office)
(818)395-2714 (cell)

-----Original Message-----
From: Beowulf [mailto:beowulf-***@beowulf.org] On Behalf Of ***@eagleeyet.net
Sent: Sunday, November 04, 2018 10:27 PM
To: Gerald Henriksen <***@gmail.com>
Cc: ***@beowulf.org
Subject: Re: [Beowulf] More about those underwater data centers

Probably a stupid question here,

What is the advantage of using salty sea water, let's say, over mineral oil? I have seen these guys on YouTube showing that a PC will still run in a fish tank with all components submerged in mineral oil.
Yes, it will be messier to change components, but would the use of mineral oil be more efficient?

_______________________________________________
Beowulf mailing list, ***@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http:/
Stu Midgley
2018-11-05 23:30:49 UTC
Permalink
I refute both these claims.

You DO want to run your boards immersed in coolant. It works wonderfully
well, is easy to live with, servicing is easy... and saves you almost 1/2
your power bill.

People are scared of immersion cooling, but it isn't that difficult to live
with. Some things are harder but other things are way easier. In total,
it balances out.

Also, given the greater reliability of components you get, you do less
servicing.

If you haven't lived with it, you really have no idea what you are missing.


Serviceability is NOT challenging.



You really do NOT want to run boards immersed in coolant - yeah, there's
Post by Lux, Jim (337K) via Beowulf
folks doing it at HPC scale
Whatever the coolant, it leaks, it oozes, it gets places you don't want it
to go. And serviceability is challenging. You need to pull the "wet" boards
out, or you need to connect and disconnect fluid connectors, etc. If
you're in an environment where you can manage that (or are forced into it
by necessity), then you can do it.
--
Dr Stuart Midgley
***@gmail.com
Prentice Bisbal via Beowulf
2018-11-06 16:16:58 UTC
Permalink
Post by Lux, Jim (337K) via Beowulf
. And serviceability is challenging. You need to pull the "wet" boards
out, or you need to connect and disconnect fluid connectors, etc.  If
you're in an environment where you can manage that (or are forced into
it by necessity), then you can do it.
I think everyone on this list already knows I'm no fan of mineral oil
immersion (it just seems too messy to me. Sorry, Stu), but immersion
cooling with other liquids, such as 3M Novec engineered fluid,
addresses a lot of your concerns. It has a low boiling point, not much
above room temperature, and it was originally meant to be an
electronic parts cleaner (according to a 3M rep at the 3M booth at SC
a few years ago), so if you pull a component out of it, it dries very
quickly and should be immaculately clean.

The low boiling point is an excellent feature for heat transfer, too,
since it boils from the heat of the processor (ebullient cooling). This
change of state absorbs a lot of energy, making it very effective at
transferring heat away from the processor. The vapor can then rise and
condense on a chilled-water heat exchanger, where it again transfers a
lot of heat through a change of state.

Prentice
Post by Lux, Jim (337K) via Beowulf
I refute both these claims.
You DO want to run your boards immersed in coolant.  It works
wonderfully well, is easy to live with, servicing is easy... and saves
you almost 1/2 your power bill.
People are scared of immersion cooling, but it isn't that difficult to
live with.  Some things are harder but other things are way easier. 
In total, it balances out.
Also, given the greater reliability of components you get, you do less
servicing.
If you haven't lived with it, you really have no idea what you are missing.
Serviceability is NOT challenging.
You really do NOT want to run boards immersed in coolant - yeah,
there's folks doing it at HPC scale
Whatever the coolant, it leaks, it oozes, it gets places you don't
want it to go. And serviceability is challenging. You need to pull
the "wet" boards out, or you need to connect and disconnect fluid
connectors, etc.  If you're in an environment where you can manage
that (or are forced into it by necessity), then you can do it.
--
Dr Stuart Midgley
_______________________________________________
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
Lux, Jim (337K) via Beowulf
2018-11-06 19:03:42 UTC
Permalink
True enough.
Ebullient cooling does have some challenges – you can form vapor films, which are good insulators, but if you get the system working right, nothing beats a phase change for heat transfer.

I’m aware of high power vacuum tubes using ebullient cooling. I’ve seen demonstrations for electronics cooling using it, but I don’t know of any “production” applications.
I’m always skeptical of “first customer” testimonials (viz. ebullientcooling.com).

There are some studies out there of experimental systems (NREL did one, Minnesota did one, but I think both of those were full immersion).




Jim Lux
(818)354-2075 (office)
(818)395-2714 (cell)

From: Beowulf [mailto:beowulf-***@beowulf.org] On Behalf Of Prentice Bisbal via Beowulf
Sent: Tuesday, November 06, 2018 8:17 AM
To: ***@beowulf.org
Subject: Re: [Beowulf] More about those underwater data centers

. And serviceability is challenging. You need to pull the "wet" boards out, or you need to connect and disconnect fluid connectors, etc. If you're in an environment where you can manage that (or are forced into it by necessity), then you can do it.

I think everyone on this list already knows I'm no fan of mineral oil immersion (it just seems too messy to me. Sorry, Stu), but immersion cooling with other liquids, such as 3M Novec engineered fluid, addresses a lot of your concerns. It has a low boiling point, not much above room temperature, and it was originally meant to be an electronic parts cleaner (according to a 3M rep at the 3M booth at SC a few years ago), so if you pull a component out of it, it dries very quickly and should be immaculately clean.

The low boiling point is an excellent feature for heat transfer, too, since it boils from the heat of the processor (ebullient cooling). This change of state absorbs a lot of energy, making it very effective at transferring heat away from the processor. The vapor can then rise and condense on a chilled-water heat exchanger, where it again transfers a lot of heat through a change of state.

Prentice
On 11/05/2018 06:30 PM, Stu Midgley wrote:
I refute both these claims.

You DO want to run your boards immersed in coolant. It works wonderfully well, is easy to live with, servicing is easy... and saves you almost 1/2 your power bill.

People are scared of immersion cooling, but it isn't that difficult to live with. Some things are harder but other things are way easier. In total, it balances out.

Also, given the greater reliability of components you get, you do less servicing.

If you haven't lived with it, you really have no idea what you are missing.


Serviceability is NOT challenging.



You really do NOT want to run boards immersed in coolant - yeah, there's folks doing it at HPC scale

Whatever the coolant, it leaks, it oozes, it gets places you don't want it to go. And serviceability is challenging. You need to pull the "wet" boards out, or you need to connect and disconnect fluid connectors, etc. If you're in an environment where you can manage that (or are forced into it by necessity), then you can do it.
--
Dr Stuart Midgley
***@gmail.com




_______________________________________________

Beowulf mailing list, ***@beowulf.org sponsored by Penguin Computing

To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
Prentice Bisbal via Beowulf
2018-11-06 22:56:36 UTC
Permalink
Post by Lux, Jim (337K) via Beowulf
True enough.
Ebullient cooling does have some challenges – you can form vapor
films, which are good insulators, but if you get the system working
right, nothing beats phase changes for a heat transfer.
If I recall what I learned in my Transport Phenomena classes in
engineering school, you need a reasonably high temperature difference
to get a stable film like that. For that to happen, radiant heat
transfer needs to be the dominant heat transfer mechanism; in the range
of operation we are talking about, the temperature difference isn't
that great, and conduction is still the dominant form of heat transfer.

Here's an example of what 3M Novec ebullient cooling looks like. It
doesn't look like it's anywhere near the film boiling regime:

http://youtu.be/CIbnl3Pj15w

--
Prentice
Post by Lux, Jim (337K) via Beowulf
*Prentice Bisbal via Beowulf
*Sent:* Tuesday, November 06, 2018 8:17 AM
*Subject:* Re: [Beowulf] More about those underwater data centers
. And serviceability is challenging. You need to pull the "wet"
boards out, or you need to connect and disconnect fluid
connectors, etc.  If you're in an environment where you can manage
that (or are forced into it by necessity), then you can do it.
I think everyone on this list already knows I'm no fan of mineral oil
immersion (it just seems too messy to me. Sorry, Stu), but immersion
cooling with other liquids, such as 3M Novec engineered fluid
addresses a lot of your concerns. It has a low boiling point, not much
above room temperature, and it was originally meant to be an
electronic parts cleaner (according to a 3M rep at the 3M booth at SC
a few years ago, so if you pull a component out of it, it dries very
quickly and should be immaculately clean.
The low boiling point is an excellent feature for heat transfer, too,
since it boils from the heat of the processor (ebullient cooling).
This change of state absorbs a lot of energy, making it very effective
at transferring heat away from the processor. The vapor can then rise
and condense on a heat exchanger with a chilled water heat exchanger,
where it again transfers a lot of heat through a change of state.
Prentice
I refute both these claims.
You DO want to run your boards immersed in coolant.  It works
wonderfully well, is easy to live with, servicing is easy... and
saves you almost 1/2 your power bill.
People are scared of immersion cooling, but it isn't that
difficult to live with.  Some things are harder but other things
are way easier.  In total, it balances out.
Also, given the greater reliability of components you get, you do less servicing.
If you haven't lived with it, you really have no idea what you are missing.
Serviceability is NOT challenging.
You really do NOT want to run boards immersed in coolant -
yeah, there's folks doing it at HPC scale
Whatever the coolant, it leaks, it oozes, it gets places you
don't want it to go. And serviceability is challenging. You
need to pull the "wet" boards out, or you need to connect and
disconnect fluid connectors, etc.  If you're in an environment
where you can manage that (or are forced into it by
necessity), then you can do it.
--
Dr Stuart Midgley
_______________________________________________
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
John Hearns via Beowulf
2018-11-07 09:12:11 UTC
Permalink
Thinking about liquid cooling, and ebullient cooling in particular: the
main sources of heat on our current-architecture servers are the CPU
package and the voltage regulators, then the DIMMs.
Concentrating on the CPU die package: it is engineered with a flat
metal surface which is intended to have thermal paste transfer heat
across to a flat metal heatsink.
Those heatsinks are finned to have air blown across them to transport the
heat away.

In liquid immersion, should we be looking at having a spiky surface on
the CPU die packages and the voltage regulators?
Maybe we should spray the entire board with a 'flocking' compound and
give it a matt finish!
I am being semi-serious. I guess a lot of CFD simulation has been done
regarding air cooling with fins.
How much work has gone into pointy surfaces on the die package, which
would increase contact area, of course, and also act as nucleation
points for bubbles?

One interesting experiment to do - assuming the flat areas of the CPU
in an immersive system do not have (non-thermal-paste) heatsinks bolted
on: take two systems and roughen up the die package surface with
sandpaper on one. Compare temperatures.

ps. I can't resist adding this. Sorry, Stu: http://youtu.be/kHnifVTSFEo

I guess Kenneth Williams is a typical vendor Site Engineer.
pps. the actress in the red dress had her career ruined by this film -
she never got a serious role again after being perfectly typecast.







On Tue, 6 Nov 2018 at 22:57, Prentice Bisbal via Beowulf <
Post by Lux, Jim (337K) via Beowulf
True enough.
Ebullient cooling does have some challenges – you can form vapor films,
which are good insulators, but if you get the system working right, nothing
beats phase changes for a heat transfer.
If I recall what I learned in my Transport Phenomena classes in
engineering school, you need a reasonably high temperature difference to
get a stable film like that. For that to happen, radiant heat transfer
needs to be the dominant heat transfer mechanism, in the range of operation
we are talking about, the temperature difference isn't that great, and
conduction is still the dominant form of heat transfer.
Here's an example of what 3M Novec ebullient cooling looks like. It
http://youtu.be/CIbnl3Pj15w
--
Prentice
*Sent:* Tuesday, November 06, 2018 8:17 AM
*Subject:* Re: [Beowulf] More about those underwater data centers
. And serviceability is challenging. You need to pull the "wet" boards
out, or you need to connect and disconnect fluid connectors, etc. If
you're in an environment where you can manage that (or are forced into it
by necessity), then you can do it.
I think everyone on this list already knows I'm no fan of mineral oil
immersion (it just seems too messy to me. Sorry, Stu), but immersion cooling
with other liquids, such as 3M Novec engineered fluid addresses a lot of
your concerns. It has a low boiling point, not much above room temperature,
and it was originally meant to be an electronic parts cleaner (according to
a 3M rep at the 3M booth at SC a few years ago, so if you pull a component
out of it, it dries very quickly and should be immaculately clean.
The low boiling point is an excellent feature for heat transfer, too,
since it boils from the heat of the processor (ebullient cooling). This
change of state absorbs a lot of energy, making it very effective at
transferring heat away from the processor. The vapor can then rise and
condense on a heat exchanger with a chilled water heat exchanger, where it
again transfers a lot of heat through a change of state.
Prentice
I refute both these claims.
You DO want to run your boards immersed in coolant. It works wonderfully
well, is easy to live with, servicing is easy... and saves you almost 1/2
your power bill.
People are scared of immersion cooling, but it isn't that difficult to
live with. Some things are harder but other things are way easier. In
total, it balances out.
Also, given the greater reliability of components you get, you do less servicing.
If you haven't lived with it, you really have no idea what you are missing.
Serviceability is NOT challenging.
You really do NOT want to run boards immersed in coolant - yeah, there's
folks doing it at HPC scale
Whatever the coolant, it leaks, it oozes, it gets places you don't want it
to go. And serviceability is challenging. You need to pull the "wet" boards
out, or you need to connect and disconnect fluid connectors, etc. If
you're in an environment where you can manage that (or are forced into it
by necessity), then you can do it.
--
Dr Stuart Midgley
_______________________________________________
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
Lux, Jim (337K) via Beowulf
2018-11-07 13:29:23 UTC
Permalink
The “boilers” for high power tubes have “warts” all over the inside, specifically to provide nucleation sites.

But this brings up a whole bootstrapping thing – use a cluster to do CFD for the cooling for the next cluster.


From: Beowulf <beowulf-***@beowulf.org> on behalf of "***@beowulf.org" <***@beowulf.org>
Reply-To: John Hearns <***@googlemail.com>
Date: Wednesday, November 7, 2018 at 12:13 AM
To: "***@beowulf.org" <***@beowulf.org>
Subject: Re: [Beowulf] More about those underwater data centers

Thinking about liquid cooling, and ebullient cooling in particular: the main sources of heat on our current-architecture servers are the CPU package and the voltage regulators, then the DIMMs.
Concentrating on the CPU die package, it is engineered with a flat metal surface which is intended to have a thermal paste to transfer heat across to a flat metal heatsink.
Those heatsinks are finned to have air blown across them to transport the heat away.

In liquid immersion should we be looking at having a spiky surface on the CPU die packages and the voltage regulators?
Maybe we should spray the entire board with a 'flocking' compound and give it a matt finish!
I am being semi-serious. I guess a lot of CFD simulation has been done regarding air cooling with fins.
How much work has gone into pointy surfaces on the die package, which would increase contact area of course and also act as nucleation points for bubbles?

One interesting experiment to do - assuming the flat areas of the CPU in an immersive system do not have (non-thermal-paste) heatsinks bolted on:
take two systems and roughen up the die package surface with sandpaper on one. Compare temperatures.

ps. I can't resist adding this. Sorry, Stu: http://youtu.be/kHnifVTSFEo
I guess Kenneth Williams is a typical vendor Site Engineer.
pps. the actress in the red dress had her career ruined by this film - she never got a serious role again after being perfectly typecast.







On Tue, 6 Nov 2018 at 22:57, Prentice Bisbal via Beowulf <***@beowulf.org> wrote:
On 11/06/2018 02:03 PM, Lux, Jim (337K) wrote:
True enough.
Ebullient cooling does have some challenges – you can form vapor films, which are good insulators, but if you get the system working right, nothing beats phase changes for a heat transfer.
If I recall what I learned in my Transport Phenomena classes in engineering school, you need a reasonably high temperature difference to get a stable film like that. For that to happen, radiant heat transfer needs to be the dominant heat transfer mechanism, in the range of operation we are talking about, the temperature difference isn't that great, and conduction is still the dominant form of heat transfer.

Here's an example of what 3M Novec ebullient cooling looks like. It doesn't look like it's anywhere near the film boiling regime:

http://youtu.be/CIbnl3Pj15w
--
Prentice




From: Beowulf [mailto:beowulf-***@beowulf.org] On Behalf Of Prentice Bisbal via Beowulf
Sent: Tuesday, November 06, 2018 8:17 AM
To: ***@beowulf.org<mailto:***@beowulf.org>
Subject: Re: [Beowulf] More about those underwater data centers

. And serviceability is challenging. You need to pull the "wet" boards out, or you need to connect and disconnect fluid connectors, etc. If you're in an environment where you can manage that (or are forced into it by necessity), then you can do it.

I think everyone on this list already knows I'm no fan of mineral oil immersion (it just seems too messy to me. Sorry, Stu), but immersion cooling with other liquids, such as 3M Novec engineered fluid, addresses a lot of your concerns. It has a low boiling point, not much above room temperature, and it was originally meant to be an electronic parts cleaner (according to a 3M rep at the 3M booth at SC a few years ago), so if you pull a component out of it, it dries very quickly and should be immaculately clean.

The low boiling point is an excellent feature for heat transfer, too, since it boils from the heat of the processor (ebullient cooling). This change of state absorbs a lot of energy, making it very effective at transferring heat away from the processor. The vapor can then rise and condense on a heat exchanger with a chilled water heat exchanger, where it again transfers a lot of heat through a change of state.

Prentice
On 11/05/2018 06:30 PM, Stu Midgley wrote:
I refute both these claims.

You DO want to run your boards immersed in coolant. It works wonderfully well, is easy to live with, servicing is easy... and saves you almost 1/2 your power bill.

People are scared of immersion cooling, but it isn't that difficult to live with. Some things are harder but other things are way easier. In total, it balances out.

Also, given the greater reliability of components you get, you do less servicing.

If you haven't lived with it, you really have no idea what you are missing.


Serviceability is NOT challenging.



You really do NOT want to run boards immersed in coolant - yeah, there's folks doing it at HPC scale

Whatever the coolant, it leaks, it oozes, it gets places you don't want it to go. And serviceability is challenging. You need to pull the "wet" boards out, or you need to connect and disconnect fluid connectors, etc. If you're in an environment where you can manage that (or are forced into it by necessity), then you can do it.
--
Dr Stuart Midgley
***@gmail.com





_______________________________________________
Beowulf mailing list, ***@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
Prentice Bisbal via Beowulf
2018-11-08 15:40:04 UTC
Permalink
Heat fins are used to increase the surface area available for heat
transfer, since the rate of energy transfer by conduction is directly
proportional to the surface area. Heat fins are needed when air is
involved because air has such a low thermal conductivity.

The thermal conductivities of liquids are much higher, so heat fins
aren't as necessary. For example, I've read that water can transfer
heat orders of magnitude better than air, so using water to remove
heat from a processor would need orders of magnitude less surface area
for the same energy transfer rate.
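A quick Newton's-law-of-cooling estimate, Q = h*A*dT, makes the surface-area point concrete (the convection coefficients below are generic textbook ranges, assumed here for illustration):

```python
# Surface area needed to reject 200 W at a 40 K surface-to-fluid
# temperature difference, from Newton's law of cooling Q = h * A * dT.
# The h values are typical mid-range textbook figures (assumptions).

Q, DT = 200.0, 40.0  # W, K

h = {                                   # convection coefficient, W/(m^2*K)
    "natural convection, air": 10.0,
    "forced convection, air": 50.0,
    "forced convection, water": 5000.0,
}

for regime, coeff in h.items():
    area = Q / (coeff * DT)             # required area, m^2
    print(f"{regime:26s}: {area * 1e4:7.1f} cm^2")
```

Going from still air to forced water drops the required area from thousands of square centimetres to about ten, which is why bare packages can work in liquid.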

Also, liquids have higher viscosities than gases, so we have to worry
about 'boundary layers'. A boundary layer is the region where a
flowing fluid is in contact with a solid. The friction between the
fluid and the solid slows down the fluid near the solid. This affects
both gases and liquids, but since liquids have higher viscosities, the
effect is more noticeable.

Think about a car's radiator - the air side has all the fins on it,
and the liquid side has smooth pipe walls.

https://en.wikipedia.org/wiki/Boundary_layer
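The viscosity effect can be sketched with the classic Blasius laminar flat-plate estimate, delta ~ 5x/sqrt(Re_x) (the flow speed, plate length, and kinematic viscosities below are assumptions for illustration):

```python
import math

# Laminar flat-plate boundary layer thickness (Blasius estimate):
#   delta ~ 5 * x / sqrt(Re_x),  Re_x = U * x / nu
# Kinematic viscosities are rough room-temperature assumptions.

NU = {                    # kinematic viscosity, m^2/s (approximate)
    "air": 1.5e-5,
    "water": 1.0e-6,
    "mineral oil": 3.0e-5,
}

def bl_thickness(x, u, nu):
    """Boundary layer thickness (m) at distance x (m) for flow speed u (m/s)."""
    re_x = u * x / nu
    return 5.0 * x / math.sqrt(re_x)

# 5 cm along a board, 0.5 m/s bulk flow:
for fluid, nu in NU.items():
    d = bl_thickness(0.05, 0.5, nu)
    print(f"{fluid:12s}: delta ~ {d * 1e3:.1f} mm")
```

The more viscous oil carries the thickest boundary layer, consistent with the point that natural-convection immersion leaves little driving force to thin it.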

Convection is an equally important mode of heat transfer in fluids,
and in the boundary layer, where the fluid isn't moving as fast, heat
transfer isn't as good, so you need to keep your boundary layer from
becoming too thick.

Since liquids have much higher thermal conductivities, and boundary
layer effects are more of a concern, I actually think a smooth heat
transfer surface would be better in these immersion cooling cases. I'm
sure smaller, more spaced-out fins would probably help heat transfer
without creating too much of a boundary layer, but making those heat
sinks adds cost for increased performance in a situation where it
probably isn't needed.

Now, direct-contact cooling systems like Asetek products do have fins
on the liquid side, if I remember correctly, but in those systems
there are pumps to provide forced convection. In immersion cooling,
you are relying on natural convection, so there isn't as much driving
force to overcome viscosity/boundary-layer effects and push the liquid
through the heat fins.

That's my thoughts, anyway.

Prentice
Post by John Hearns via Beowulf
Thinking about liquid cooling , and the ebuillient cooling, the main
sources of heat on our current architecture servers are the CPU
package and the voltage regulators. Then the DIMMs.
Concentrating on the CPU die package, it is engineered with a flat
metal surface which is intended to have a thermal paste to transfer
heat across to a flat metal heatsink.
Those heatsinks are finned to have air blown across them to transport
the heat away.
In liquid immersion should we be looking at having a spiky surface on
the CPU die packages and the voltage regulators?
Maybe we should spray the entire board with a 'flocking'' compound and
give it a matt finish!
I am being semi-serious. I guess a lot of CFD simulation done
regarding air cooling with fins.
How much work has gone into pointy surfaces on the die package, which
would increase contact area of course and also act as nucleation
points for bubbles?
One interesting experiment to do - assuming the flat areas of the CPU
in an immersive system do not have (non thermal paste) heatsinks
take two systems and roughen up the die package surfacewith sandpaper
on one. Compare temperatures.
ps. I can't resist adding this. Sorry Stu .
http://youtu.be/kHnifVTSFEo
I guess Kenneth Williams is a typical vendor Site Engineer.
pps. the actress in the redress had her career ruined by this film -
she ver got a serious role again after perfectly being typecast.
On Tue, 6 Nov 2018 at 22:57, Prentice Bisbal via Beowulf
Post by Lux, Jim (337K) via Beowulf
True enough.
Ebullient cooling does have some challenges – you can form vapor
films, which are good insulators, but if you get the system
working right, nothing beats phase changes for a heat transfer.
If I recall what I learned in my Transport Phenomena classes in
engineering school, you need a reasonably high temperature
difference to get a stable film like that. For that to happen,
radiant heat transfer needs to be the dominant heat transfer
mechanism, in the range of operation we are talking about, the
temperature difference isn't that great, and conduction is still
the dominant form of heat transfer.
Here's an example of what 3M Novec ebullient cooling looks like.
http://youtu.be/CIbnl3Pj15w
--
Prentice
Post by Lux, Jim (337K) via Beowulf
*Prentice Bisbal via Beowulf
*Sent:* Tuesday, November 06, 2018 8:17 AM
*Subject:* Re: [Beowulf] More about those underwater data centers
. And serviceability is challenging. You need to pull the
"wet" boards out, or you need to connect and disconnect fluid
connectors, etc.  If you're in an environment where you can
manage that (or are forced into it by necessity), then you
can do it.
I think everyone on this list already knows I'm no fan of mineral
oil immersion (it just seems too messy to me. Sorry, Stu), but
immersion cooling with other liquids, such as 3M Novec engineered
fluid addresses a lot of your concerns. It has a low boiling
point, not much above room temperature, and it was originally
meant to be an electronic parts cleaner (according to a 3M rep at
the 3M booth at SC a few years ago, so if you pull a component
out of it, it dries very quickly and should be immaculately clean.
The low boiling point is an excellent feature for heat transfer,
too, since it boils from the heat of the processor (ebullient
cooling). This change of state absorbs a lot of energy, making it
very effective at transferring heat away from the processor. The
vapor can then rise and condense on a heat exchanger with a
chilled water heat exchanger, where it again transfers a lot of
heat through a change of state.
Prentice
I refute both these claims.
You DO want to run your boards immersed in coolant.  It works
wonderfully well, is easy to live with, servicing is easy...
and saves you almost 1/2 your power bill.
People are scared of immersion cooling, but it isn't that
difficult to live with.  Some things are harder but other
things are way easier.  In total, it balances out.
Also, given the greater reliability of components you get,
you do less servicing.
If you haven't lived with it, you really have no idea what
you are missing.
Serviceability is NOT challenging.
You really do NOT want to run boards immersed in coolant
- yeah, there's folks doing it at HPC scale
Whatever the coolant, it leaks, it oozes, it gets places
you don't want it to go. And serviceability is
challenging. You need to pull the "wet" boards out, or
you need to connect and disconnect fluid connectors, etc.
If you're in an environment where you can manage that (or
are forced into it by necessity), then you can do it.
--
Dr Stuart Midgley
_______________________________________________
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
Prentice Bisbal via Beowulf
2018-11-08 15:46:08 UTC
Permalink
One comment - my dissertation below is specifically about non-ebullient
immersion cooling. As Jim Lux pointed out in a later e-mail, in
ebullient cooling some kind of surface feature to promote nucleation
could be beneficial. Ebullient cooling is a whole different beast from
normal (non-ebullient) immersion cooling, since in that case you have
changes of state and gas bubbles flowing through a liquid.

However, in all of the live and video demonstrations I've seen of
Novec, the processors were completely bare and bubbles were forming at
a pretty rapid rate, so again I think creating some sort of heat sink
for this would add cost with no significant benefit.

Prentice Bisbal
Lead Software Engineer
Princeton Plasma Physics Laboratory
http://www.pppl.gov
Post by Prentice Bisbal via Beowulf
Heat fins are used to increase the surface area available for heat
transfer, since the rate of energy transfer by conduction is directly
proportional to the surface area. Heat fins are needed when air is
involved because air has such a low thermal conductivity.
The thermal conductivity of liquids is much higher, so heat fins
aren't as necessary. For example, I've read that water can transfer
heat orders of magnitude better than air, so using water to remove
heat from a processor would need orders of magnitude less surface
area for the same energy transfer rate.
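The surface-area argument can be put in rough numbers with Newton's law
of cooling, Q = h * A * dT. The heat load, temperature difference, and
convective coefficients below are illustrative textbook-range values I
am assuming for the sketch, not measurements of any real system:

```python
# Area needed to shed a fixed heat load: A = Q / (h * dT).
# The convective heat-transfer coefficients are assumed mid-range
# textbook figures: forced air ~50 W/(m^2*K), water ~5000 W/(m^2*K).
Q = 200.0    # W, plausible CPU package heat load (assumed)
dT = 40.0    # K, surface-to-coolant temperature difference (assumed)

h_air = 50.0      # W/(m^2*K), forced convection in air
h_water = 5000.0  # W/(m^2*K), forced convection in water

area_air = Q / (h_air * dT)      # 0.1 m^2: hence big finned heatsinks
area_water = Q / (h_water * dT)  # 0.001 m^2: a bare lid can suffice

print(area_air, area_water)  # 0.1 0.001
print(h_water / h_air)       # 100.0: the ratio of required areas
```

The ~100x difference in required area is just the ratio of the two
coefficients, which is why it is the air side of a heat exchanger that
carries the fins.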
Also, liquids have higher viscosities than gases, so we have to worry
about 'boundary layers'. A boundary layer is the region where a
flowing fluid is in contact with a solid. The friction between the
liquid and the solid slows down the fluid near the solid. This affects
both gases and liquids, but since liquids have higher viscosities, the
effect is more noticeable.
Think about a car's radiator - the air side has all the fins on it,
and the liquid side has smooth pipe walls.
https://en.wikipedia.org/wiki/Boundary_layer
Convection is an equally important mode of heat transfer in fluids,
and in the boundary layer, where the liquid isn't moving as fast,
heat transfer isn't as good, so you need to keep your boundary layer
from becoming too thick.
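For a feel of the thicknesses involved, the laminar flat-plate
(Blasius) estimate delta ~ 5x / sqrt(Re_x) can be evaluated for a
slow, natural-convection-like flow of water. The flow speed and the
distance along the board are assumptions for illustration:

```python
import math

# Laminar flat-plate (Blasius) boundary-layer estimate:
#   Re_x  = rho * u * x / mu
#   delta ~ 5 * x / sqrt(Re_x)
# Water properties at ~20 C; the flow speed and length are assumed
# values for illustration only.
rho = 998.0   # kg/m^3, density of water
mu = 1.0e-3   # Pa*s, dynamic viscosity of water
u = 0.05      # m/s, slow, natural-convection-like flow (assumed)
x = 0.05      # m, distance along the board (assumed)

re_x = rho * u * x / mu            # Reynolds number at x
delta = 5.0 * x / math.sqrt(re_x)  # boundary-layer thickness, m

print(re_x)   # ~2500: well within the laminar regime
print(delta)  # ~0.005 m, i.e. ~5 mm
```

A boundary layer of a few millimetres is comparable to the gaps
between heatsink fins, which is the concern about closely spaced fins
in a slow liquid flow.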
Since liquids have much higher thermal conductivities, and boundary
layer effects are more of a concern, I actually think a smooth heat
transfer surface would be better in these immersion cooling cases. I'm
sure smaller, more spaced out fins would probably help heat transfer
without creating too much of a boundary layer, but making those heat
sinks adds cost for increased performance in a situation where it
probably isn't needed.
Now, direct-contact cooling systems like Asetek products do have fins
on the liquid side, if I remember correctly, but in those systems,
there are pumps to provide forced convection. In immersion cooling,
you are relying on natural convection, so there isn't as much
driving force to overcome viscosity/boundary layer effects to force
the liquid through the heat fins.
That's my thoughts, anyway.
Prentice
Post by John Hearns via Beowulf
Thinking about liquid cooling, and ebullient cooling: the main
sources of heat on our current architecture servers are the CPU
package and the voltage regulators. Then the DIMMs.
Concentrating on the CPU die package, it is engineered with a flat
metal surface which is intended to have a thermal paste to transfer
heat across to a flat metal heatsink.
Those heatsinks are finned to have air blown across them to transport
the heat away.
In liquid immersion should we be looking at having a spiky surface on
the CPU die packages and the voltage regulators?
Maybe we should spray the entire board with a 'flocking' compound
and give it a matt finish!
I am being semi-serious. I guess a lot of CFD simulation has been done
regarding air cooling with fins.
How much work has gone into pointy surfaces on the die package, which
would increase contact area of course and also act as nucleation
points for bubbles?
One interesting experiment to do - assuming the flat areas of the CPU
in an immersive system do not have (non thermal paste) heatsinks
bolted on:
take two systems and roughen up the die package surface with
sandpaper on one. Compare temperatures.
ps. I can't resist adding this. Sorry Stu.
http://youtu.be/kHnifVTSFEo
I guess Kenneth Williams is a typical vendor Site Engineer.
pps. the actress in the red dress had her career ruined by this film -
she never got a serious role again after being perfectly typecast.
On Tue, 6 Nov 2018 at 22:57, Prentice Bisbal via Beowulf
Post by Lux, Jim (337K) via Beowulf
True enough.
Ebullient cooling does have some challenges – you can form vapor
films, which are good insulators, but if you get the system
working right, nothing beats phase change for heat transfer.
If I recall what I learned in my Transport Phenomena classes in
engineering school, you need a reasonably high temperature
difference to get a stable film like that. For that to happen,
radiant heat transfer needs to be the dominant heat transfer
mechanism. In the range of operation we are talking about, the
temperature difference isn't that great, and conduction is still
the dominant form of heat transfer.
Here's an example of what 3M Novec ebullient cooling looks like. It
doesn't look like it's anywhere near the film boiling regime:
http://youtu.be/CIbnl3Pj15w
--
Prentice
Post by Lux, Jim (337K) via Beowulf
Of *Prentice Bisbal via Beowulf
*Sent:* Tuesday, November 06, 2018 8:17 AM
*Subject:* Re: [Beowulf] More about those underwater data centers
. And serviceability is challenging. You need to pull the
"wet" boards out, or you need to connect and disconnect
fluid connectors, etc.  If you're in an environment where
you can manage that (or are forced into it by necessity),
then you can do it.
I think everyone on this list already knows I'm no fan of
mineral oil immersion (it just seems too messy to me. Sorry,
Stu), but immersion cooling with other liquids, such as 3M Novec
engineered fluid, addresses a lot of your concerns. It has a low
boiling point, not much above room temperature, and it was
originally meant to be an electronic parts cleaner (according to
a 3M rep at the 3M booth at SC a few years ago), so if you pull a
component out of it, it dries very quickly and should be
immaculately clean.
The low boiling point is an excellent feature for heat transfer,
too, since it boils from the heat of the processor (ebullient
cooling). This change of state absorbs a lot of energy, making
it very effective at transferring heat away from the processor.
The vapor can then rise and condense on a heat exchanger with a
chilled water heat exchanger, where it again transfers a lot of
heat through a change of state.
Prentice
I refute both these claims.
You DO want to run your boards immersed in coolant.  It
works wonderfully well, is easy to live with, servicing is
easy... and saves you almost 1/2 your power bill.
People are scared of immersion cooling, but it isn't that
difficult to live with.  Some things are harder but other
things are way easier.  In total, it balances out.
Also, given the greater reliability of components you get,
you do less servicing.
If you haven't lived with it, you really have no idea what
you are missing.
Serviceability is NOT challenging.
You really do NOT want to run boards immersed in coolant
- yeah, there's folks doing it at HPC scale
Whatever the coolant, it leaks, it oozes, it gets places
you don't want it to go. And serviceability is
challenging. You need to pull the "wet" boards out, or
you need to connect and disconnect fluid connectors,
etc.  If you're in an environment where you can manage
that (or are forced into it by necessity), then you can do it.
--
Dr Stuart Midgley
Joe Landman
2018-11-08 16:06:23 UTC
Permalink
Post by Prentice Bisbal via Beowulf
One comment - my dissertation below is specifically about
non-ebullient immersion cooling. As Jim Lux pointed out in a later
e-mail, in ebullient cooling, some kind of surface feature to promote
nucleation could be beneficial. Ebullient cooling is a whole different
beast from normal (non-ebullient) immersive cooling, since in that
case you have changes of state and gas bubbles flowing through a liquid.
However, in all of the live and video demonstrations I've seen of
Novec, the processors were completely bare, bubbles were forming at a
pretty rapid rate, so again I think creating some sort of heat sink
for this would add cost with no significant benefit.
I get to use physics ... whee!

Short version ... most (all?) heats of vaporization (the energy you have
to pour into a liquid to turn it from a liquid to a gas at its boiling
point temperature/pressure) are (much) higher than the energy you
deposit into the same mass of liquid to bring it from just above
freezing to boiling.

Cv for water is about 4.186 J/(gram * C), so 1 gram of water, going from
just above 0 C (freezing point) to 100 C (boiling point at sea level)
means Q = m Cv delta_T = 418.6 J.

Take that 1g of water at 100 C, and turn it into vapor at 100 C, and you
get Q = m Hv = 2256 J.

Put another way, evaporative cooling allows you to absorb much more
heat (about 5x in this case) for the same mass.
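Joe's arithmetic checks out; as a quick sanity check in a couple of
lines, using the same standard textbook values for water he quotes:

```python
# Sensible heat (0 C -> 100 C) vs. latent heat of vaporization for
# 1 g of water at 1 atm, using standard textbook values.
C = 4.186     # J/(g*K), specific heat of liquid water
H_V = 2256.0  # J/g, heat of vaporization at 100 C

m = 1.0  # grams
sensible = m * C * (100.0 - 0.0)  # heat 1 g from 0 C to 100 C
latent = m * H_V                  # boil 1 g at 100 C

print(sensible)           # ~418.6 J
print(latent)             # 2256.0 J
print(latent / sensible)  # ~5.4: the phase change dominates
```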

That said, convective cooling is a somewhat different beast. Immersion
cooling is, I believe, primarily convective in nature.

I think DUG is primarily convective based, and it is sufficient for
their use case (please correct me if I am wrong).

There is a fundamental danger with evaporative cooling, in that one has
to make sure one does not ignore the vapor, or potential heat induced
reaction products of the vapor.  Fluorinert has some issues: 
https://en.wikipedia.org/wiki/Fluorinert#Toxicity if you overcook it ...
--
Joe Landman
e: ***@gmail.com
t: @hpcjoe
w: https://scalability.org
g: https://github.com/joelandman
l: https://www.linkedin.com/in/joelandman

_______________________________________________
Prentice Bisbal via Beowulf
2018-11-08 16:41:57 UTC
Permalink
Post by Joe Landman
I get to use physics ... whee!
I can appreciate that. I hardly ever get to use my engineering degree,
which is why I become so vocal when the topic turns to heat
transfer/fluid mechanics.

--
Prentice
John Hearns via Beowulf
2018-11-08 17:47:57 UTC
Permalink
Alan Turing proved that gin would be excellent for use in delay lines
https://www.theregister.co.uk/2013/06/28/wilkes_centenary_mercury_memory/
I think we now need Prentice and Joe to prove that the ideal immersive
coolant medium is beer.

On Thu, 8 Nov 2018 at 16:42, Prentice Bisbal via Beowulf <
Post by Joe Landman
I get to use physics ... whee!
I can appreciate that. I hardly ever get to use my engineering degree,
which is why I become so vocal when the topic turns to heat transfer/fluid
mechanics.
--
Prentice
Douglas Eadline
2018-11-08 19:46:29 UTC
Permalink
Post by John Hearns via Beowulf
Alan Turing proved that gin would be excellent for use in delay lines
https://www.theregister.co.uk/2013/06/28/wilkes_centenary_mercury_memory/
I think we now need Prentice and Joe to prove that the ideal immersive
coolant medium is beer.
Not too far off: there are LED bulbs that are filled with
liquid silicone to distribute the heat, and it is the same
stuff used in beer as a foaming agent. (In my opinion,
if your beer needs a foaming agent, it is not real beer.)

--
Doug
Lux, Jim (337K) via Beowulf
2018-11-09 03:17:29 UTC
Permalink
I’ll bet the surface is rough enough that there are plenty of nucleation centers. Consider things like leads on parts.

From: Beowulf <beowulf-***@beowulf.org> on behalf of "***@beowulf.org" <***@beowulf.org>
Reply-To: Prentice Bisbal <***@pppl.gov>
Date: Thursday, November 8, 2018 at 7:47 AM
To: "***@beowulf.org" <***@beowulf.org>
Subject: Re: [Beowulf] More about those underwater data centers


John Hearns via Beowulf
2018-11-13 16:31:03 UTC
Permalink
Ooooh.... liquid cooling video from SC18
https://twitter.com/Yuryu/status/1062178413270786048

j***@eagleeyet.net
2018-11-17 08:35:44 UTC
Permalink
That's the kind of liquid cooling I was thinking of, to be fair. Yes,
it would be messy for maintenance, but you can probably squeeze a lot
more performance out of the hardware, no?
Ooooh…. liquid cooling video from sC18
https://twitter.com/Yuryu/status/1062178413270786048
On Fri, 9 Nov 2018 at 03:42, Lux, Jim (337K) via Beowulf
I’ll bet the surface is rough enough that there are plenty of
nucleation centers. Consider things like leads on parts.
DATE: Thursday, November 8, 2018 at 7:47 AM
SUBJECT: Re: [Beowulf] More about those underwater data centers
One comment - my dissertation below is specifically about
non-ebullient immersion cooling. As Jim Lux pointed out in a later
e-mail, in ebullient cooling, some kind of surface feature to
promote nucleation could be beneficial. Ebbulient cooling is a whole
different beast from normal (non-ebullient) immersive cooling, since
in that case you have changes of state and gas bubbles flowing
through a liquid.
However, in all of the live and video demonstrations I've seen of
Novec, the processors were completely bare, bubbles were forming at
a pretty rapid rate, so again I think creating some sort of heat
sink for this would add cost with no significant benefit.
Prentice Bisbal
Lead Software Engineer
Princeton Plasma Physics Laboratory
http://www.pppl.gov
Heat fins are used to increase the surface area used for heat
transfer, since the rate of energy transfer by conduction is
directly proportional the surface area. Heat fins are needed when
air is involved because air has such a low thermal conductivity.
Thermal conductivity of liquids are much high, so heat fins aren't
as necessary. For example, I've read that water can transfer heat
orders of magnitude better than air, so using water to remove hear
from a processor would need orders of magnitude less surface area
for the same energy transfer rate.
Also, liquids have higher viscosities than gases, so we have to
worry about 'boundary layers'. A boundary layer is the region where
a flowing fluid is in contact with a solid. The friction between the
fluid and the solid slows down the fluid near the solid. This
affects both gases and liquids, but since liquids have higher
viscosities, the effect is more noticeable.
Think about a car's radiator - the air side has all the fins on it,
and the liquid side has smooth pipe walls.
https://en.wikipedia.org/wiki/Boundary_layer
Convection heat transfer is an equally important mode of heat
transfer in fluids, and in the boundary layer, where the liquids
aren't moving as fast, heat transfer isn't as good, so you need to
keep your boundary layer from becoming too thick.
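As an order-of-magnitude illustration of the boundary-layer point, here is a sketch using the textbook laminar flat-plate estimates delta ~ 5x/sqrt(Re_x) and delta_t ~ delta/Pr^(1/3). The flow speed and length below are made-up values meant to mimic a slow natural-convection flow, not measurements:

```python
import math

# Laminar flat-plate boundary-layer estimates:
#   delta   ~ 5 x / sqrt(Re_x)       (velocity boundary layer)
#   delta_t ~ delta / Pr**(1/3)      (thermal boundary layer, Pr > 0.6)

U = 0.05    # assumed flow speed past the board, m/s (slow natural convection)
x = 0.05    # distance along the heated surface, m

nu_water = 1.0e-6  # kinematic viscosity of water, m^2/s
Pr_water = 7.0     # Prandtl number of water at room temperature

Re_x = U * x / nu_water                    # Reynolds number at x
delta = 5.0 * x / math.sqrt(Re_x)          # velocity boundary layer, m
delta_t = delta / Pr_water ** (1.0 / 3.0)  # thermal boundary layer, m

print(f"Re_x = {Re_x:.0f}")
print(f"velocity boundary layer: {delta * 1e3:.1f} mm")
print(f"thermal boundary layer:  {delta_t * 1e3:.1f} mm")
```

A millimetre-scale stagnant film like this is comparable to the gaps between closely spaced fins, which is exactly why tight fin arrays in a slow liquid flow can end up mostly full of boundary layer.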
Since liquids have much higher thermal conductivities, and boundary
layer effects are more of a concern, I actually think a smooth heat
transfer surface would be better in these immersion cooling cases.
I'm sure smaller, more spaced-out fins would probably help heat
transfer without creating too much of a boundary layer, but making
those heat sinks adds cost for increased performance in a situation
where it probably isn't needed.
Now, direct-contact cooling systems like Asetek products do have
fins on the liquid side, if I remember correctly, but in those
systems there are pumps to provide forced convection. In immersion
cooling, you are relying on natural convection, so there isn't as
much driving force to overcome viscosity/boundary layer effects to
force the liquid through the heat fins.
Those are my thoughts, anyway.
Prentice
Thinking about liquid cooling, and ebullient cooling, the main
sources of heat on our current-architecture servers are the CPU
package and the voltage regulators. Then the DIMMs.
Concentrating on the CPU die package, it is engineered with a flat
metal surface which is intended to have a thermal paste to transfer
heat across to a flat metal heatsink.
Those heatsinks are finned to have air blown across them to
transport the heat away.
In liquid immersion should we be looking at having a spiky surface
on the CPU die packages and the voltage regulators?
Maybe we should spray the entire board with a 'flocking' compound
and give it a matt finish!
I am being semi-serious. I guess a lot of CFD simulation has been
done regarding air cooling with fins.
How much work has gone into pointy surfaces on the die package,
which would increase contact area of course and also act as
nucleation points for bubbles?
One interesting experiment to do - assuming the flat areas of the
CPU in an immersive system do not have (non-thermal-paste) heatsinks -
take two systems and roughen up the die package surface with
sandpaper on one. Compare temperatures.
ps. I can't resist adding this. Sorry Stu .
http://youtu.be/kHnifVTSFEo
I guess Kenneth Williams is a typical vendor Site Engineer.
pps. the actress in the red dress had her career ruined by this film -
she never got a serious role again after being perfectly typecast.
On Tue, 6 Nov 2018 at 22:57, Prentice Bisbal via Beowulf
True enough.
Ebullient cooling does have some challenges – you can form vapor
films, which are good insulators, but if you get the system working
right, nothing beats phase changes for a heat transfer.
If I recall what I learned in my Transport Phenomena classes in
engineering school, you need a reasonably high temperature
difference to get a stable film like that. For that to happen,
radiant heat transfer needs to be the dominant heat transfer
mechanism. In the range of operation we are talking about, the
temperature difference isn't that great, and conduction is still the
dominant form of heat transfer.
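For the curious, that regime boundary can be made quantitative with the classic Zuber correlation for the critical heat flux at which nucleate boiling breaks down into film boiling. The water properties below are standard 1-atm saturation values from heat-transfer textbooks; this is a textbook estimate, not a design number:

```python
# Zuber correlation for critical heat flux (saturated pool boiling):
#   q_max = 0.131 * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25
# Properties of water at 100 C, 1 atm (standard saturation values):

h_fg = 2.257e6   # latent heat of vaporization, J/kg
rho_l = 958.0    # saturated liquid density, kg/m^3
rho_v = 0.60     # saturated vapor density, kg/m^3
sigma = 0.0589   # surface tension, N/m
g = 9.81         # gravitational acceleration, m/s^2

q_max = 0.131 * h_fg * rho_v ** 0.5 * (sigma * g * (rho_l - rho_v)) ** 0.25

print(f"critical heat flux for water at 1 atm: {q_max / 1e6:.2f} MW/m^2")
```

Below that flux the surface stays in the nucleate-boiling regime discussed in this thread; the insulating vapor film only forms well beyond it, at a much larger surface superheat.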
Here's an example of what 3M Novec ebullient cooling looks like:
http://youtu.be/CIbnl3Pj15w
--
Prentice
Prentice Bisbal via Beowulf
SENT: Tuesday, November 06, 2018 8:17 AM
SUBJECT: Re: [Beowulf] More about those underwater data centers
. And serviceability is challenging. You need to pull the "wet"
boards out, or you need to connect and disconnect fluid connectors,
etc. If you're in an environment where you can manage that (or are
forced into it by necessity), then you can do it.
I think everyone on this list already knows I'm no fan of mineral
oil immersion (it just seems too messy to me. Sorry, Stu), but
immersion cooling with other liquids, such as 3M Novec engineered
fluid, addresses a lot of your concerns. It has a low boiling point,
not much above room temperature, and it was originally meant to be
an electronic parts cleaner (according to a 3M rep at the 3M booth
at SC a few years ago), so if you pull a component out of it, it
dries very quickly and should be immaculately clean.
The low boiling point is an excellent feature for heat transfer,
too, since it boils from the heat of the processor (ebullient
cooling). This change of state absorbs a lot of energy, making it
very effective at transferring heat away from the processor. The
vapor can then rise and condense on a chilled-water heat exchanger,
where it again transfers a lot of heat through a change of state.
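A rough sketch of why the phase change carries so much heat: the mass of fluid that must evaporate per second is just Q/h_fg. The latent heat below is an assumed round number of the order 3M quotes for Novec-class fluids, not an exact datasheet value:

```python
# Mass of engineered fluid that must boil off to carry away a CPU's heat:
#   m_dot = Q / h_fg

Q = 200.0        # CPU heat load, W
h_fg = 1.1e5     # assumed latent heat of a Novec-class fluid, J/kg

m_dot = Q / h_fg  # kg/s of liquid evaporated
print(f"evaporation rate: {m_dot * 1e3:.1f} g/s")

# For comparison, removing the same 200 W with air at a 10 K rise
# (c_p ~ 1005 J/(kg K)) takes roughly 11x the mass flow:
cp_air = 1005.0
dT_air = 10.0
m_dot_air = Q / (cp_air * dT_air)
print(f"equivalent air mass flow: {m_dot_air * 1e3:.1f} g/s")
```

A couple of grams per second of boiling fluid quietly does the work of a fan moving an order of magnitude more air, with no moving parts at the hot surface.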
Prentice
I refute both these claims.
You DO want to run your boards immersed in coolant. It works
wonderfully well, is easy to live with, servicing is easy... and
saves you almost 1/2 your power bill.
People are scared of immersion cooling, but it isn't that difficult
to live with. Some things are harder but other things are way
easier. In total, it balances out.
Also, given the greater reliability of components you get, you do less servicing.
If you haven't lived with it, you really have no idea what you are missing.
Serviceability is NOT challenging.
You really do NOT want to run boards immersed in coolant - yeah,
there's folks doing it at HPC scale
Whatever the coolant, it leaks, it oozes, it gets places you don't
want it to go. And serviceability is challenging. You need to pull
the "wet" boards out, or you need to connect and disconnect fluid
connectors, etc. If you're in an environment where you can manage
that (or are forced into it by necessity), then you can do it.
--
Dr Stuart Midgley
_______________________________________________
Beowulf mailing list, ***@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.o
Prentice Bisbal via Beowulf
2018-11-19 21:11:57 UTC
Permalink
Actually, that particular liquid is Fluorinert. Fluorinert boils at a
low temperature, so it evaporates pretty quickly at room temperature, so
it's not messy at all. Fluorinert is similar to Novec, and Novec was
initially meant to be an electronics parts cleaner (according to the 3M
booth a few years ago), so parts pulled out of that particular liquid
should be clean and dry a few seconds after pulling them out of the
immersion tank.

That video is from the Allied Control booth
(http://www.allied-control.com/the-basics-of-immersion-cooling/). I
spent a decent amount of time there picking their brains on immersion
cooling. I also stopped by the 3M booth, and pestered them, too. Here's
what I found out:

1. I was wrong about not needing a heatsink or some other surface
treatment for the CPUs/GPUs. CPUs & GPUs need to have a boiling
enhancement coating (BEC) on them to promote nucleation of the vapor
bubbles (boiling). This is a porous copper material. In the Allied
examples, I believe this was a separate metal plate held on to the
processor with a clamp just like a heatsink, but they can also be
applied like a paint.

http://multimedia.3m.com/mws/media/563566O/3mtm-microporous-metallic-boiling-enhancement-coating-l-20227.pdf

https://www.1-act.com/boiling-enhancementmicro-porous-coatings/

2. Those Lucite or Plexiglas (Perspex if you're not American) tanks used
for demos are not good for everyday use. 3M recommends using welded
stainless steel. Glass can be used, but the caulk used to seal glass
tanks can be a problem. Some of the solids in the caulk can leach out
into the liquid and eventually foul the BEC. The web page below has
some good information on immersion cooling using 3M Novec:

https://www.3m.com/3M/en_US/novec-us/applications/immersion-cooling/

Towards the bottom, under the heading "Learn more about liquid immersion
cooling and 3M data center solutions" are links to informative PDFs and
videos. In particular, I like this one which provides best practices for
building your own immersion cooling solution.

https://multimedia.3m.com/mws/media/1010266O/3m-two-phase-immersion-cooling-best-practices-technical-paper.pdf


Prentice
_______________________________________________
Beowulf mailing list, ***@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit ht
Lux, Jim (337K) via Beowulf
2018-11-06 18:52:54 UTC
Permalink
I suppose, we are talking about HPC here with already large infrastructure, so the “incremental pain” from having to deal with immersion cooling is probably smaller than the “incremental pain” from having to air cool a substantially larger facility – floor space costs money in many ways. Having a rig to hoist electronics out of the vat (or drain the tank, or however) isn’t all that much different than having any other specialized support equipment –

If you had a few dozen processors, probably not worth it – get up to 1000 nodes, and it winds up being small in comparison.

However, I think it is still pretty exotic – you guys have it figured out, and that’s probably part of your secret sauce – I wouldn’t recommend it for the “casual” or “small” HPC installation.

OTOH, if someone wants to try it, have at it – it does work really well from a thermal standpoint – moving liquid around is MUCH more efficient than moving air around.
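To quantify "moving liquid around is MUCH more efficient than moving air around": the heat carried per unit volume of coolant per kelvin of temperature rise scales with rho * c_p. The property values below are standard room-temperature figures:

```python
# Heat carried per cubic metre of coolant per kelvin of temperature rise:
#   volumetric heat capacity = rho * c_p

rho_air, cp_air = 1.2, 1005.0        # kg/m^3, J/(kg K) at ~20 C
rho_water, cp_water = 998.0, 4180.0  # kg/m^3, J/(kg K) at ~20 C

vhc_air = rho_air * cp_air           # ~1.2 kJ/(m^3 K)
vhc_water = rho_water * cp_water     # ~4.2 MJ/(m^3 K)
ratio = vhc_water / vhc_air

print(f"air:   {vhc_air / 1e3:.1f} kJ/(m^3 K)")
print(f"water: {vhc_water / 1e6:.2f} MJ/(m^3 K)")
print(f"ratio: {ratio:.0f}x")
```

Per unit temperature rise, a given volume of water carries roughly 3500 times the heat of the same volume of air, which is why liquid loops need so much less pumped volume (and pumping power) than air handlers.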



Jim Lux
(818)354-2075 (office)
(818)395-2714 (cell)

From: Stu Midgley [mailto:***@gmail.com]
Sent: Monday, November 05, 2018 3:31 PM
To: Lux, Jim (337K) <***@jpl.nasa.gov>
Cc: Jonathan Aquilina <***@eagleeyet.net>; ***@gmail.com; Beowulf List <***@beowulf.org>
Subject: Re: [Beowulf] More about those underwater data centers

I refute both these claims.

You DO want to run your boards immersed in coolant. It works wonderfully well, is easy to live with, servicing is easy... and saves you almost 1/2 your power bill.

People are scared of immersion cooling, but it isn't that difficult to live with. Some things are harder but other things are way easier. In total, it balances out.

Also, given the greater reliability of components you get, you do less servicing.

If you haven't lived with it, you really have no idea what you are missing.


Serviceability is NOT challenging.



You really do NOT want to run boards immersed in coolant - yeah, there's folks doing it at HPC scale

Whatever the coolant, it leaks, it oozes, it gets places you don't want it to go. And serviceability is challenging. You need to pull the "wet" boards out, or you need to connect and disconnect fluid connectors, etc. If you're in an environment where you can manage that (or are forced into it by necessity), then you can do it.
--
Dr Stuart Midgley
***@gmail.com
Prentice Bisbal via Beowulf
2018-11-05 15:41:58 UTC
Permalink
Maybe it really has nothing to do with being close to people, and
Microsoft really wants to cater to the high-frequency trading (HFT)
crowd. According to this paper, there are a number of optimal locations
for HFT between pairs of markets that sit in the middle of the ocean.

https://journals.aps.org/pre/abstract/10.1103/PhysRevE.82.056104
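The optimal-location idea is just a speed-of-light budget. As a sketch with assumed numbers (the great-circle distance and the fiber refractive index of ~1.47 are my own round figures, not from the paper):

```python
# One-way light-time between two markets through optical fiber,
# and the equal-latency midpoint an ocean-based relay would sit at.

c = 299_792_458.0     # speed of light in vacuum, m/s
n_fiber = 1.47        # assumed refractive index of optical fiber
v = c / n_fiber       # propagation speed in fiber, m/s

d_ny_london = 5.57e6  # approx. great-circle distance NY-London, m

t_one_way = d_ny_london / v         # endpoint-to-endpoint time, s
t_midpoint = (d_ny_london / 2) / v  # midpoint to either endpoint, s

print(f"NY->London one-way: {t_one_way * 1e3:.1f} ms")
print(f"midpoint to either market: {t_midpoint * 1e3:.1f} ms")
```

A node parked mid-Atlantic would see both markets at roughly half the endpoint-to-endpoint light-time, which is the sort of intermediate point the paper maps out.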

Prentice
Post by Lux, Jim (337K) via Beowulf
https://arstechnica.com/gadgets/2018/11/satya-nadella-the-cloud-is-going-to-move-underwater/
He cites proximity to humans as a particular advantage: about 50 percent of the world's population lives within 120 miles of a coast. Putting servers in the ocean means that they can be near population centers, which in turn ensures lower latencies. Low latencies are particularly important for real-time services, including Microsoft's forthcoming https://arstechnica.com/gadgets/2018/10/microsoft-announces-project-xcloud-xbox-game-streaming-for-myriad-devices/.
I’m not sure there’s a huge population of Xcloud-Xbox gamers in Orkney. There's not much daylight this time of year, of course, so maybe that's what those Orcadians are up to.
And I believe that 100% of the UK's population lives within 120 miles of a coast. ("coast" gets around the often contentious discussion of where the "sea" starts in the face of tidal estuaries and tidal flats - I was struck by the sheer volume of discussion related to "what point in the UK is farthest from the sea")
_______________________________________________
Beowulf mailing list, ***@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beo