Discussion:
[PATCH] Added captive portal possibility
Alexandre Chataignon
2015-09-23 09:20:47 UTC
Hi,

This patch adds the possibility to enable a captive portal while connman
is in tethering mode by sending its own IP via the DNS proxy (which acts
as a DNS server more than a proxy in this case).

I needed this for an IoT device, in which the tethering mode is in fact a
hotspot mode. With a lightweight HTTP server and a patched connman, it
offers a light and easy-to-use captive portal, for example to diagnose
the device.

The captive portal is enabled in the settings file via the key
Tethering.Captive=true (defaults to false).

It has been in use for a month on our devices without failures.

Thanks for your reviews,

--
Alexandre Chataignon
Craig McQueen
2015-10-16 05:52:35 UTC
Post by Alexandre Chataignon
This patch adds the possibility to enable a captive portal while connman
is in tethering mode by sending its own IP via the DNS proxy (which acts
as a DNS server more than a proxy in this case).
I needed this for an IoT device, in which the tethering mode is in fact a
hotspot mode. With a lightweight HTTP server and a patched connman, it
offers a light and easy-to-use captive portal, for example to diagnose
the device.
The captive portal is enabled in the settings file via the key
Tethering.Captive=true (defaults to false).
It has been in use for a month on our devices without failures.
Thanks for your reviews,
I'm interested in this concept. I applied your patch to ConnMan 1.30, and I've been testing it.

In my testing with DNS queries using the Linux 'dig' utility, I found it returns malformed responses. That is because the 'dig' query includes an additional record (ARCOUNT=1), of type OPT (41). When your patch's send_response_A() creates the response, it just uses the total length of the query (which includes the question and the additional record) but then sets ARCOUNT=0, so it doesn't account for a query that contains additional records. So the response appears to have just one answer record of type "OPT", followed by an additional 16 bytes of junk.

So the response is malformed for any query that includes more than one question, or any additional records.

I also note that this patch sends an A record response to any query, linked to the query name. But it doesn't make sense to send an A record answer if the query wasn't an A record question. E.g. if the query is a PTR (reverse lookup) question for 216.58.220.110, it doesn't make sense to respond with an A record saying the IP address of 110.220.58.216.in-addr.arpa is 192.168.1.1.

So the patch would need to be improved as follows (a rough sketch covering the first three points follows the list):

* Check that the query's first question is for an A record, and only send an A record answer if so.
* Calculate the correct length of the query header and question record(s), in order to position the answer in the correct location in the response.
* Calculate the correct total length of the response.
* Consider whether an "OPT" or other additional record should be copied into the answer.
* Consider how to handle a query with more than one question.
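For concreteness, here is how the first three points could be handled. This is only a rough sketch, not the actual dnsproxy code; the helper names are made up, and it assumes a single-question UDP query:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define DNS_HEADER_LEN 12

/* Walk the QNAME labels of the first question and return the length of
 * that question (name + QTYPE + QCLASS), or -1 if it is malformed. */
static int first_question_len(const unsigned char *buf, size_t len)
{
	size_t off = DNS_HEADER_LEN;

	while (off < len && buf[off] != 0) {
		if ((buf[off] & 0xC0) != 0)	/* no compression expected in a query */
			return -1;
		off += buf[off] + 1;
	}
	if (off + 5 > len)			/* root label + QTYPE + QCLASS */
		return -1;

	return (int)(off + 5 - DNS_HEADER_LEN);
}

/* Build a reply that answers the first question with "addr" (already in
 * network byte order) and TTL 0, but only for an IN/A question.
 * Returns the reply length, or -1 if no such reply should be sent. */
static int build_portal_reply(const unsigned char *query, size_t query_len,
			      unsigned char *reply, size_t reply_size,
			      uint32_t addr)
{
	int qlen = first_question_len(query, query_len);
	uint16_t qtype, qclass;
	unsigned char *answer;

	if (qlen < 0)
		return -1;

	memcpy(&qtype, query + DNS_HEADER_LEN + qlen - 4, 2);
	memcpy(&qclass, query + DNS_HEADER_LEN + qlen - 2, 2);
	if (ntohs(qtype) != 1 || ntohs(qclass) != 1)	/* only answer A/IN */
		return -1;

	if (reply_size < (size_t)(DNS_HEADER_LEN + qlen + 16))
		return -1;

	/* Copy the header and the first question only; OPT and any other
	 * additional records from the query are simply not copied. */
	memcpy(reply, query, DNS_HEADER_LEN + qlen);
	reply[2] = 0x81; reply[3] = 0x80;	/* QR=1, RD=1, RA=1, NOERROR */
	reply[4] = 0x00; reply[5] = 0x01;	/* QDCOUNT = 1 */
	reply[6] = 0x00; reply[7] = 0x01;	/* ANCOUNT = 1 */
	reply[8] = 0x00; reply[9] = 0x00;	/* NSCOUNT = 0 */
	reply[10] = 0x00; reply[11] = 0x00;	/* ARCOUNT = 0 */

	answer = reply + DNS_HEADER_LEN + qlen;
	answer[0] = 0xC0; answer[1] = 0x0C;	/* name: pointer to the question name */
	answer[2] = 0x00; answer[3] = 0x01;	/* TYPE A */
	answer[4] = 0x00; answer[5] = 0x01;	/* CLASS IN */
	memset(answer + 6, 0, 4);		/* TTL = 0, so clients do not cache it */
	answer[10] = 0x00; answer[11] = 0x04;	/* RDLENGTH = 4 */
	memcpy(answer + 12, &addr, 4);

	return DNS_HEADER_LEN + qlen + 16;
}

The total reply length is then the 12-byte header plus the first question plus the 16-byte answer, which also covers the length calculation above.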
--
Craig McQueen
Patrik Flykt
2015-10-19 07:57:33 UTC
Post by Craig McQueen
* Check that the query first question is for an A record, and only send an A record if so.
* Calculate the correct length of the query header and question
record(s), in order to position the answer in the correct location in
the response.
* Calculate the correct total length of the response.
* Consider whether an "OPT" or other additional record should be copied into the answer
* Consider how to handle a query with more than one question.
Let's attack the captive portal situation by redirecting traffic from
the hosts to local ports using netfilter. It's a much more elegant and
modular solution than to modify DNS replies from ConnMan.

Cheers,

Patrik
Alexandre Chataignon
2015-10-19 08:29:23 UTC
Post by Craig McQueen
I'm interested in this concept. I applied your patch to ConnMan 1.30, and I've been testing it.
In my testing with DNS queries using the Linux 'dig' utility, I found it returns malformed responses. That is because the 'dig' query includes an additional record (ARCOUNT=1), of type OPT (41). When your patch's send_response_A() creates the response, it just uses the total length of the query (which includes the question and the additional record) but then sets ARCOUNT=0, so it doesn't account for a query that contains additional records. So the response appears to have just one answer record of type "OPT", followed by an additional 16 bytes of junk.
So the response is malformed for any query that includes more than one question, or any additional records.
I also note that this patch sends an A record response to any query, linked to the query name. But it doesn't make sense to send an A record answer if the query wasn't an A record question. E.g. if the query is a PTR (reverse lookup) question for 216.58.220.110, it doesn't make sense to respond with an A record saying the IP address of 110.220.58.216.in-addr.arpa is 192.168.1.1.
* Check that the query first question is for an A record, and only send an A record if so.
* Calculate the correct length of the query header and question record(s), in order to position the answer in the correct location in the response.
* Calculate the correct total length of the response.
* Consider whether an "OPT" or other additional record should be copied into the answer
* Consider how to handle a query with more than one question.
Hi Craig,

Thank you very much for your review.

It's true, I hadn't thought about DNS requests more complex than a
single, simple A query (which works in the majority of cases).

I will try to correct this and provide a new patch soon.

Alexandre
Craig McQueen
2015-10-20 01:57:36 UTC
Post by Alexandre Chataignon
Post by Craig McQueen
I'm interested in this concept. I applied your patch to ConnMan 1.30, and I've been testing it.
In my testing with DNS queries using the Linux 'dig' utility, I found it returns malformed responses. That is because the 'dig' query includes an additional record (ARCOUNT=1), of type OPT (41). When your patch's send_response_A() creates the response, it just uses the total length of the query (which includes the question and the additional record) but then sets ARCOUNT=0, so it doesn't account for a query that contains additional records. So the response appears to have just one answer record of type "OPT", followed by an additional 16 bytes of junk.
So the response is malformed for any query that includes more than one question, or any additional records.
I also note that this patch sends an A record response to any query, linked to the query name. But it doesn't make sense to send an A record answer if the query wasn't an A record question. E.g. if the query is a PTR (reverse lookup) question for 216.58.220.110, it doesn't make sense to respond with an A record saying the IP address of 110.220.58.216.in-addr.arpa is 192.168.1.1.
* Check that the query's first question is for an A record, and only send an A record answer if so.
* Calculate the correct length of the query header and question record(s), in order to position the answer in the correct location in the response.
* Calculate the correct total length of the response.
* Consider whether an "OPT" or other additional record should be copied into the answer.
* Consider how to handle a query with more than one question.
Post by Alexandre Chataignon
Hi Craig,
Thank you very much for your review.
It's true, I hadn't thought about DNS requests more complex than a single, simple A query (which works in the majority of cases).
I will try to correct this and provide a new patch soon.
I've made some patches, which I'll send through shortly. They fix the above issues (except possibly the OPT additional record), and also deal with another issue:

* Internal DNS queries on the loopback interface should be permitted. That allows the device itself to access DNS servers on the Internet, while blocking clients on tethered interface(s) from doing so.
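The check itself is simple; roughly like this (only a sketch with an invented name, not the actual patch):

#include <stdbool.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Return true if a DNS request arrived from the loopback interface; such
 * requests should be forwarded normally rather than answered with the
 * captive-portal address. */
static bool query_from_loopback(const struct sockaddr *sa)
{
	if (sa->sa_family == AF_INET) {
		const struct sockaddr_in *sin = (const struct sockaddr_in *) sa;

		/* anything in 127.0.0.0/8 */
		return (ntohl(sin->sin_addr.s_addr) >> 24) == 127;
	}

	if (sa->sa_family == AF_INET6) {
		const struct sockaddr_in6 *sin6 = (const struct sockaddr_in6 *) sa;

		return IN6_IS_ADDR_LOOPBACK(&sin6->sin6_addr);
	}

	return false;
}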
Post by Patrik Flykt
A captive portal is easier to implement by setting a NAT rule that redirects
incoming connections to a desired port on the local x.x.x.1 tethering
interface.
True, but:

* Allowing DNS queries constitutes a potential side-channel of communication, which could be a security issue for some applications. It's arguable how important that is.
* I also want the device itself to be accessible with some DNS name. e.g. (at minimum) my-device.lan, or serial-number.lan. The captive portal DNS idea is a simpler way to achieve that goal. What would you suggest as an alternative implementation of that functionality (that can simply serve ConnMan's latest tether IP address in response to my-device.lan)?
Post by Patrik Flykt
Once the captive portal stuff is cleared out, the redirecting NAT
rule for that host can be modified not to capture connections. It's much
simpler than having to rewrite DNS replies from ConnMan. Your patch
enforces captive portal implementation for each and every device all the
time, for sure other use cases include letting hosts through that have
completed the steps enforced by the captive portal.
Good point. I will send through a patch that allows the loopback interface to do normal DNS look-ups. That would partially address that (e.g. allowing a captive portal to provide an HTTP proxy server), but wouldn't be a complete solution (e.g. for a mail server).
Post by Patrik Flykt
Let's attack the captive portal situation by redirecting traffic from the hosts to
local ports using netfilter. It's a much more elegant and modular solution than
to modify DNS replies from ConnMan.
You've nearly persuaded me. I'm still stuck on how I could alternatively implement the DNS serving of the my-device.lan feature.

Since I've already done some work on this, I'll go ahead and send my patches, even though I understand they are unlikely to be incorporated into the mainline code.
--
Regards,
Craig McQueen
Patrik Flykt
2015-10-19 07:55:37 UTC
Hi,
Post by Alexandre Chataignon
This patch adds the possibility to enable a captive portal while connman
is in tethering mode by sending its own IP via the DNS proxy (which acts
as a DNS server more than a proxy in this case).
I needed this usage for an IoT device, in which the tethering mode is in
fact a hotspot mode. With a light http server and patched connman, it
offers the possibility to have a light and easy-to-use captive portal
for example to diagnose the device.
A captive portal is easier to implement by setting a NAT rule that
redirects incoming connections to a desired port on the local x.x.x.1
tethering interface. Once the captive portal stuff is cleared out, the
redirecting NAT rule for that host can be modified not to capture
connections. It's much simpler than having to rewrite DNS replies from
ConnMan. Your patch enforces the captive portal for each and every device
all the time; surely other use cases include letting through hosts that
have completed the steps enforced by the captive portal.
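As a rough illustration only (this is not ConnMan code; the bridge name
"tether", the port and the use of system() are assumptions for the
sketch, and in practice these would just be plain iptables commands in a
script):

#include <stdio.h>
#include <stdlib.h>

/* Redirect every HTTP connection arriving on the tether bridge to the
 * local portal port. */
static int portal_redirect_all(void)
{
	return system("iptables -t nat -A PREROUTING -i tether "
		      "-p tcp --dport 80 -j REDIRECT --to-ports 80");
}

/* Exempt a host that has completed the portal steps by inserting a rule
 * that matches before the redirect and simply falls through. */
static int portal_exempt_host(const char *client_ip)
{
	char cmd[256];

	snprintf(cmd, sizeof(cmd),
		 "iptables -t nat -I PREROUTING -i tether -s %s -j RETURN",
		 client_ip);
	return system(cmd);
}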

Cheers,

Patrik
Alexandre Chataignon
2015-10-19 08:26:46 UTC
Post by Patrik Flykt
A captive portal is easier to implement by setting a NAT rule that
redirects incoming connections to a desired port on the local x.x.x.1
tethering interface. Once the captive portal stuff is cleared out, the
redirecting NAT rule for that host can be modified not to capture
connections. It's much simpler than having to rewrite DNS replies from
ConnMan. Your patch enforces captive portal implementation for each and
every device all the time, for sure other use cases include letting
hosts through that have completed the steps enforced by the captive
portal.
Hi,

This solution was the first I implemented (no need to modify connman
source code).

The problem is that in your case, DNS requests are answered by a remote
DNS server. However, as I said, the tethering mode here is in fact a «
hotspot » mode, i.e. we don't have an internet connection on the other
end.

So your solution will work if we want to access the device via an IP
address (for example http://1.2.3.4) but not via a DNS name (for
example http://device.config or even http://www.google.fr), which is
much more practical.

Alexandre
Patrik Flykt
2015-10-19 09:40:19 UTC
Hi,
Post by Alexandre Chataignon
This solution was the first I implemented (no need to modify connman
source code).
The problem is that in your case, DNS requests are answered by a remote
DNS server. However, as I said, the tethering mode here is in fact a «
hotspot » mode, i.e. we don't have an internet connection on the other end.
So your solution will work if we want to access the device via an IP
address (for example http://1.2.3.4) but not via a DNS name (for
example http://device.config or even http://www.google.fr), which is
much more practical.
I was thinking that any connection going to e.g. port 80 would be forced
to the locally used x.x.x.1 address on the tether interface, like an
automatic proxy solution. This can then be refined to work only for a
subset of hosts, by looking up e.g. www.google.fr and setting up a
transparent proxy/captive portal only for those. Besides, using
http://1.2.3.4 would go through unnoticed, as it is already an IP
address :-). device.config is not a proper host name as it does not
exist, and ConnMan should respond properly to DNS lookups...

That said, reading the contents of e.g. /run/connman/hosts and replying
with those to DNS requests sounds like a decent solution to me. But
still that would not capture http://216.34.181.45.
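On the lookup side, something as simple as this would do (only a sketch;
the function name is invented, each line is assumed to be of the usual
"address hostname ..." form, and only the first name per line is checked):

#include <stdio.h>
#include <string.h>
#include <strings.h>
#include <arpa/inet.h>

/* Look up "name" in a hosts-style file such as /run/connman/hosts and,
 * if found, store its IPv4 address in *out.  Returns 1 on success. */
static int lookup_local_host(const char *path, const char *name,
			     struct in_addr *out)
{
	char line[256], addr[64], host[192];
	FILE *fp = fopen(path, "r");
	int found = 0;

	if (!fp)
		return 0;

	while (!found && fgets(line, sizeof(line), fp)) {
		if (line[0] == '#')
			continue;
		if (sscanf(line, "%63s %191s", addr, host) != 2)
			continue;
		if (strcasecmp(host, name) == 0 &&
		    inet_pton(AF_INET, addr, out) == 1)
			found = 1;
	}

	fclose(fp);
	return found;
}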

Cheers,

Patrik
Alexandre Chataignon
2015-10-19 09:57:50 UTC
Post by Patrik Flykt
I was thinking of any connection going to e.g. port 80 would be forced
to the locally used x.x.x.1 address on the tether interface. Like an
automatic proxy solution. This can then be refined to work only for a
subset of hosts, by looking up e.g. www.google.fr and setting up a
transparent proxy/captive portal only for those. Besides, using
http://1.2.3.4 would go through unnoticed, as it is already an IP
address :-). device.config is not a proper host name as it does not
exist and ConnMan should respond properly to DNS lookups...
That said, reading the contents of e.g. /run/connman/hosts and replying
with those to DNS requests sounds like a decent solution to me.
Yeah, this solution could work, but it won't do the same thing…

A captive portal should capture *all* DNS requests. Sure, a good captive
portal should answer with the correct IP and then capture the IP traffic
as you said, but I don't have internet connectivity here, so I must lie
and answer with a « false » DNS record (with a TTL of 0 so the client
shouldn't cache it).

Your solution with a list of « captured domains » isn't a captive portal
per se; we don't want to have to restrict which domains should or
shouldn't be captured, as we just want any smartphone/computer that
connects to our device to get the device's HTTP page as soon as it opens
its browser…
Post by Patrik Flykt
But still that would not capture http://216.34.181.45.
Yes, I also have an iptables rule to redirect all port 80 requests to the
tether interface, but it is not done in connman for simplicity reasons. I
can look at writing a patch to do it inside connman if you think that's
relevant.
Post by Patrik Flykt
Cheers,
Patrik
Thanks,

Alexandre