After upgrading to Kubuntu 18.10, my virtual machine was no longer able to connect to the Internet. It turned out that this problem had already been reported on Fedora:
Once I changed the firewalld configuration (/etc/firewalld/firewalld.conf) to use the iptables backend, as suggested in that bug report, the virtual machine could connect to the Internet again.
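For reference, the change amounts to one line in /etc/firewalld/firewalld.conf (the FirewallBackend option exists in firewalld 0.6 and later; the exact default value on your system may differ):

```ini
# /etc/firewalld/firewalld.conf
# Switch from the nftables backend back to iptables
FirewallBackend=iptables
```

A restart of firewalld (e.g. systemctl restart firewalld) is needed for the setting to take effect.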
I upgraded a desktop machine from Debian Squeeze to Debian Wheezy. After the upgrade, the power button was mapped to a suspend action, instead of a shutdown as in Debian Squeeze.
In order to map the power button to a shutdown action, I had to do the following:
1) Set the power button action to 'nothing' by means of dconf (for all users)
2) Add a new section [org.gnome.settings-daemon.plugins.power] with the line
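Step 1 can be done with a system-wide dconf override along these lines (the file path and the button-power key name are my assumptions; the post itself only mentions the value 'nothing'):

```ini
# /etc/dconf/db/local.d/00-power  (assumed file name; any name under local.d works)
[org/gnome/settings-daemon/plugins/power]
button-power='nothing'
```

Run dconf update afterwards so the change reaches all users.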
I’ve come across several posts on the Net stating that concatenation of MP4 files by means of MP4Box (from gpac) will make VLC play the videos concurrently instead of playing them in sequence.
I had the same problem, but once all the source MP4 videos were generated with the exact same audio and video encoding parameters (which meant adding audio to the MP4 videos that did not have any), things worked out fine.
Actually, I thought I had used the same parameters for all my MP4 files, but during the conversion of an AVCHD video to MP4 by means of ffmpeg, the frame rate somehow went from 25 fps to 26.09 fps (you can check this by running ffmpeg -i somevideo.mp4 on each of your MP4 videos). Once I added “-r 25” as the last option to ffmpeg, the issue was resolved.
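Concretely, the workflow looked something like the following sketch; the file names and the codec flags are placeholders of mine, not taken from the post (only -r 25 and MP4Box -cat are from the original workflow):

```shell
# Check the frame rate of each source (look at the "Stream ... Video" line):
ffmpeg -i clip1.mp4
# Re-encode the outlier, forcing 25 fps so all clips match:
ffmpeg -i avchd_clip.mts -c:v libx264 -c:a aac -r 25 clip2.mp4
# Concatenate with MP4Box from gpac:
MP4Box -cat clip1.mp4 -cat clip2.mp4 combined.mp4
```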
Consider the following scenario:
Alice needs to upload a document with sensitive information to https://www.disk.net/. The web server behind this URL is run by Bob and uses TLS server and client authentication so that Alice’s browser and Bob’s server can authenticate each other. Of course, both parties (client and server) need to have a certificate installed. In today’s prevalent CA-based PKI, these certificates must be issued by CAs that both entities trust. Bob’s server uses Alice’s client certificate for automatic login to Alice’s account.
Another server, run by Eve, is servicing https://www.disc.net/ and has been deliberately crafted to mimic the service at https://www.disk.net/ in order to fool people into uploading documents at the wrong service.
Eve’s server also has a certificate issued by a mutually trusted CA, and will similarly use Alice’s client certificate to do a login on a (perhaps dynamically created) account. So if Alice by mistake connects to Eve’s server instead of Bob’s server, it is likely that she will not notice her mistake.
Why doesn’t TLS in today’s browsers prevent this scenario? Because
- today’s web browsers by default accept any server certificate issued by one of the root CAs that they have been configured to trust. Today’s browsers rely fully on CAs not to issue certificates to servers that have been set up for phishing purposes, and I doubt that today’s CAs live up to this responsibility.
- client certificates can in some cases actually assist Eve’s server in creating a login experience that makes Eve’s services more closely resemble those of Bob.
In other words, the problem is not in the TLS protocol or its implementation. Rather, the problem lies in the PKI that underlies the integration of TLS in today’s browsers.
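The first point can be illustrated with a tiny sketch. The certificates and the matching function below are simplified stand-ins (not real TLS or RFC 6125 matching): the browser checks the certificate against the host the user actually requested, so a look-alike domain with its own valid certificate passes verification.

```python
# Simplified stand-in for TLS hostname verification: a certificate is
# accepted if its subject names cover the host that was requested.
def hostname_matches(cert_sans, requested_host):
    return requested_host in cert_sans

bob_cert = ["www.disk.net"]   # issued to Bob by a trusted CA
eve_cert = ["www.disc.net"]   # issued to Eve by a trusted CA

# Alice mistypes the URL; her browser verifies Eve's certificate
# against the host she actually asked for, so the check succeeds.
assert hostname_matches(eve_cert, "www.disc.net")
# Only an outright name mismatch would be rejected:
assert not hostname_matches(eve_cert, "www.disk.net")
```

Nothing in this check (or in the real, stricter version browsers do) encodes that Alice meant to talk to disk.net rather than disc.net.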
I’ve recently had a déjà vu while reviewing a proprietary application-level protocol: the protocol wasn’t robust with respect to loss of the connection at the transport layer. The reason was a design flaw I’ve seen before.
The proprietary protocol relied on a positive acknowledgement transport protocol (like TCP/IP) to ensure that unacknowledged packets got retransmitted. However, lost packets are typically not retransmitted indefinitely. TCP/IP protocol stacks, for example, only try to retransmit unacknowledged packets a limited number of times before aborting the connection.
So, if a connection breaks, how much of the data can the sender assume the receiver actually got? Not much, actually. Lack of a positive acknowledgement of a packet could mean that the receiver never got the packet. However, it could also mean that the receiver did get the packet, but the acknowledgement was lost on its way back to the sender. A broken connection may therefore leave the state of the two peers out of sync. When the transport layers become ready to exchange data again, the application layers must somehow be able to continue their protocol without knowing each other’s precise state.
The best strategy for coping with this problem depends on the problem domain and the protocol. Retransmitting the last PDU may be one option, but it requires that the protocol can cope with duplicate PDUs. Another, often more involved, solution is to have the application-level protocol let peers probe each other’s state before resuming normal protocol flow.
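As a sketch of the first option (a hypothetical protocol; the sequence numbering is my addition, not part of the protocol under review), a receiver that tolerates retransmitted duplicates could look like this:

```python
# Tag each PDU with a sequence number so the receiver can discard
# duplicates that arrive after a reconnect.
class Receiver:
    def __init__(self):
        self.last_seq = -1   # highest sequence number seen so far
        self.log = []        # payloads accepted, in order

    def handle(self, seq, payload):
        if seq <= self.last_seq:      # duplicate from a retransmission
            return "duplicate"
        self.last_seq = seq
        self.log.append(payload)
        return "accepted"

r = Receiver()
r.handle(0, "a")
r.handle(1, "b")
# The connection breaks before the ack for seq 1 reaches the sender;
# after reconnecting, the sender retransmits PDU 1 to be safe.
assert r.handle(1, "b") == "duplicate"
assert r.log == ["a", "b"]
```

The sender can then always retransmit its last unacknowledged PDU after a reconnect without risking the receiver processing it twice.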
Much the same conclusions can be drawn for negative acknowledgement protocols, by the way.
I’ve just released the first version of gnc2ods – an XSLT stylesheet that converts a GnuCash XML file to an OpenDocument 1.1 spreadsheet. See http://www.ohloh.net/p/gnc2ods for more information.
I’ve just released version 0.2.0 of tds2dbg. The new version is now able to insert debug information for public symbols as well as source code references (file and line number) in the dbg files. See http://www.ohloh.net/p/tds2dbg for more information.
Some time ago, I published tds2dbg – a small command-line tool that converts Borland debug symbol files (*.tds) to Microsoft debug symbol files (*.dbg). See http://tds2dbg.sourceforge.net/ for more information.
The tool was created while doing consultancy for one of my customers, but the customer was not too keen on maintaining it after I left. We therefore agreed to make it open source.