<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[⌘T Random Walk]]></title><description><![CDATA[Some notes]]></description><link>https://wat.im/</link><image><url>http://wat.im/favicon.png</url><title>⌘T Random Walk</title><link>https://wat.im/</link></image><generator>Ghost 4.4</generator><lastBuildDate>Sun, 05 Apr 2026 19:41:06 GMT</lastBuildDate><atom:link href="https://wat.im/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Video Next Frame prediction]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h3 id="collecting-links">Collecting Links</h3>
<ul>
<li><a href="https://openreview.net/pdf?id=SkztZYiaF7">LMVP: Vido predictor with Leaked Motion Information</a></li>
<li><a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45984.pdf">Geometry-Based Bext Frame Prediction from Monocular Video</a></li>
<li><a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Kwon_Predicting_Future_Frames_Using_Retrospective_Cycle_GAN_CVPR_2019_paper.pdf">Predicting Future Frames using Retrospective Cycle GAN</a></li>
<li><a href="https://keras.io/examples/vision/conv_lstm/">Next-frame prediction with Conv-LSTM</a></li>
</ul>
<!--kg-card-end: markdown-->]]></description><link>https://wat.im/video-next-frame-prediction/</link><guid isPermaLink="false">60abf48a5d911c2d0fc95fe1</guid><dc:creator><![CDATA[Jeff Waller]]></dc:creator><pubDate>Mon, 24 May 2021 19:13:52 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="collecting-links">Collecting Links</h3>
<ul>
<li><a href="https://openreview.net/pdf?id=SkztZYiaF7">LMVP: Vido predictor with Leaked Motion Information</a></li>
<li><a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45984.pdf">Geometry-Based Bext Frame Prediction from Monocular Video</a></li>
<li><a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Kwon_Predicting_Future_Frames_Using_Retrospective_Cycle_GAN_CVPR_2019_paper.pdf">Predicting Future Frames using Retrospective Cycle GAN</a></li>
<li><a href="https://keras.io/examples/vision/conv_lstm/">Next-frame prediction with Conv-LSTM</a></li>
</ul>
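<p>As a rough orientation for the last link above, here is a minimal sketch of the kind of model the Keras Conv-LSTM example builds: a stack of ConvLSTM2D layers mapping a window of past frames to the next frame. Shapes and hyperparameters are illustrative assumptions, not code from any of the papers listed.</p>
<pre><code># Minimal next-frame prediction sketch (illustrative shapes only).
import tensorflow as tf
from tensorflow.keras import layers, models

# Input: a window of 10 past frames, 64x64 pixels, 1 channel.
inputs = layers.Input(shape=(10, 64, 64, 1))
x = layers.ConvLSTM2D(32, kernel_size=3, padding=&apos;same&apos;,
                      return_sequences=True)(inputs)
x = layers.BatchNormalization()(x)
x = layers.ConvLSTM2D(32, kernel_size=3, padding=&apos;same&apos;,
                      return_sequences=False)(x)
# Predict the next frame as a single-channel image in [0, 1].
outputs = layers.Conv2D(1, kernel_size=3, padding=&apos;same&apos;,
                        activation=&apos;sigmoid&apos;)(x)

model = models.Model(inputs, outputs)
model.compile(optimizer=&apos;adam&apos;, loss=&apos;binary_crossentropy&apos;)
</code></pre>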
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Implementing Variational Autoencoders]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h3 id="collecting-links">Collecting links</h3>
<ul>
<li>Variational autoencoder <a href="https://keras.io/examples/generative/vae/">implementation in keras</a> with a custom training step</li>
<li><a href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/custom_training.ipynb#scrollTo=iuKuNXPORfqJ">Custom Training Using the Strategy Class</a>.</li>
<li>Within a custom loss function, it&apos;s sometimes necessary to <a href="https://github.com/keras-team/keras/issues/3155">get the epoch number</a>.</li>
<li>Some research on determining and controlling the VAE loss function to avoid posterior collapse in <a href="https://arxiv.org/pdf/1910.00698.pdf">Re-balancing Variational Autoencoder</a></li></ul>]]></description><link>https://wat.im/implementing-variational-autoencoders/</link><guid isPermaLink="false">60abeca25d911c2d0fc95fa2</guid><dc:creator><![CDATA[Jeff Waller]]></dc:creator><pubDate>Mon, 24 May 2021 18:23:32 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="collecting-links">Collecting links</h3>
<ul>
<li>Variational autoencoder <a href="https://keras.io/examples/generative/vae/">implementation in keras</a> with a custom training step</li>
<li><a href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/custom_training.ipynb#scrollTo=iuKuNXPORfqJ">Custom Training Using the Strategy Class</a>.</li>
<li>Within a custom loss function, it&apos;s sometimes necessary to <a href="https://github.com/keras-team/keras/issues/3155">get the epoch number</a> (a sketch of one workaround follows this list).</li>
<li>Some research on determining and controlling the VAE loss function to avoid posterior collapse in <a href="https://arxiv.org/pdf/1910.00698.pdf">Re-balancing Variational Autoencoder Loss for Molecule Sequence Generation</a>.</li>
<li><a href="https://adriangcoder.medium.com/a-review-of-dropout-as-applied-to-rnns-72e79ecd5b7b">A review of Dropout as applied to RNNs</a></li>
<li><a href="https://machinelearningmastery.com/use-dropout-lstm-networks-time-series-forecasting/">Dropout with LSTM Networks for Time Series Forecasting</a></li>
</ul>
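<p>For the epoch-number item above, one common workaround (a sketch assuming TF 2.x tf.keras, not the exact code from the linked issue) is to keep the epoch in a non-trainable variable that a callback updates and the loss closure reads:</p>
<pre><code>import tensorflow as tf

# A variable the loss can read; updated once per epoch by the callback.
epoch_var = tf.Variable(0.0, trainable=False)

class EpochTracker(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        epoch_var.assign(float(epoch))

def annealed_mse(y_true, y_pred):
    # Example: ramp an extra penalty up over the first 10 epochs.
    warmup = tf.minimum(epoch_var / 10.0, 1.0)
    return tf.reduce_mean(tf.square(y_true - y_pred)) * (1.0 + warmup)

# model.compile(optimizer=&apos;adam&apos;, loss=annealed_mse)
# model.fit(x, y, epochs=20, callbacks=[EpochTracker()])
</code></pre>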
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Ubuntu 18 Errata]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h2 id="overcomingerrornovideomodeactivated">Overcoming error: no video mode activated</h2>
<p>A bug, certainly, and there are at least two causes.  It&apos;s connected to the use of <code>GRUB_TIMEOUT_STYLE</code>; part of the fix is to change this setting, as described <a href="https://pov.es/linux/ubuntu/ubuntu-fixing-the-error-no-video-mode-activated-message-on-boot/">here</a>:</p>
<pre><code>#GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT_STYLE=menu
GRUB_TIMEOUT=10
</code></pre>
<p>Additionally, there</p>]]></description><link>https://wat.im/ubuntu-18-errata/</link><guid isPermaLink="false">5ea1d575ad22300c2d34d8c8</guid><dc:creator><![CDATA[Jeff Waller]]></dc:creator><pubDate>Thu, 23 Apr 2020 18:05:48 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="overcomingerrornovideomodeactivated">Overcoming error: no video mode activated</h2>
<p>A bug, certainly, and there are at least two causes.  It&apos;s connected to the use of <code>GRUB_TIMEOUT_STYLE</code>; part of the fix is to change this setting, as described <a href="https://pov.es/linux/ubuntu/ubuntu-fixing-the-error-no-video-mode-activated-message-on-boot/">here</a>:</p>
<pre><code>#GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT_STYLE=menu
GRUB_TIMEOUT=10
</code></pre>
<p>Additionally, there is a scan of menu fonts that happens during boot, but if the boot partition is different from the /usr partition, those fonts will not be found, causing an error.  A workaround is to copy those fonts to <code>/boot</code> as described in <a href="https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/699802">comment #24 of the bug thread</a>:</p>
<pre><code>   for dir in &quot;${pkgdatadir}&quot; &quot;`echo &apos;/boot/grub&apos; | sed &quot;s,//*,/,g&quot;`&quot; /usr/share/grub ; do
        for basename in unicode unifont ascii; do
            path=&quot;${dir}/${basename}.pf2&quot;
            if is_path_readable_by_grub &quot;${path}&quot; &gt; /dev/null ; then
                font_path=&quot;${path}&quot;
            else
                continue
            fi
            break 2
        done
   done
</code></pre>
<p>After performing these updates, run <code>update-grub</code>.</p>
<h2 id="vfiosetup">VFIO setup</h2>
<p>Setting up a VM using VFIO allows for near-full-rate use of the GPU while keeping a responsive desktop, which is desirable.  The important aspects include:</p>
<ul>
<li>hardware that supports IOMMU</li>
<li>a GPU that can be dedicated to the guest</li>
<li>a configuration that assigns resources, especially by restricting access to the prospective GPU from the host video driver.</li>
</ul>
<h3 id="links">Links</h3>
<ul>
<li>Ubuntu setup is discussed, but is unfortunately system-wide rather than via a boot option, in <a href="https://forum.level1techs.com/t/ubuntu-18-04-vfio-pcie-passthrough/127011">Ubuntu 18.04 &#x2013; VFIO PCIe Passthrough</a></li>
<li>Arch-linux analog setup and docs in <a href="https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#With_vfio-pci_built_into_the_kernel">PCI passthrough via OVMF</a> and <a href="https://www.reddit.com/r/archlinux/comments/acwv4n/can_i_load_the_vfiopci_module_using_a_kernel/">Can I load the vfio-pci module using a kernel parameter? reddit question</a></li>
</ul>
<h3 id="exampleetcgrubd40_customentry">Example /etc/grub.d/40_custom entry</h3>
<p>Assign and reserve a GPU via a boot option.  This assumes a specific setup (LVM with a root LV in a vg0 VG, AMD hardware); those specifics are not required.  The important part is turning on iommu and assigning vfio ids as kernel arguments.</p>
<pre><code>menuentry &quot;VFIO&quot; {
  set root=&quot;hd0,gpt2&quot;
  linux	/vmlinuz-4.15.0-96-generic root=/dev/mapper/vg0-root ro amd_iommu=on iommu=pt kvm_amd.npt=1 kvm_amd.avic=1 vfio-pci.ids=10de:1b80,10de:10f0
  initrd /initrd.img-4.15.0-96-generic
}
</code></pre>
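<p>Before rebooting into this entry, it is worth confirming that the GPU sits in a clean IOMMU group.  A small sketch (Python, assuming the standard sysfs layout on a kernel booted with IOMMU enabled):</p>
<pre><code>#!/usr/bin/env python3
# List IOMMU groups and the PCI devices in each.  The GPU to pass
# through should be alone in its group (or share it only with its
# audio function, e.g. the 10de:10f0 device above).
import os

base = &apos;/sys/kernel/iommu_groups&apos;
if not os.path.isdir(base):
    raise SystemExit(&apos;No IOMMU groups; is IOMMU enabled?&apos;)
for group in sorted(os.listdir(base), key=int):
    devices = sorted(os.listdir(os.path.join(base, group, &apos;devices&apos;)))
    print(&apos;group&apos;, group + &apos;:&apos;, &apos; &apos;.join(devices))
</code></pre>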
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Keeping up with the TensorFlow GPU training environment.]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>There are at least 5 dependencies here.</p>
<ol>
<li>Software dependency on TensorFlow version.</li>
<li>TensorFlow dependency on python version.</li>
<li>TensorFlow dependency on <code>cuda</code> and <code>cuDNN</code>.</li>
<li>Cuda dependency on NVidia drivers.</li>
<li>NVidia driver dependency on NVidia hardware.</li>
</ol>
<h3 id="softwaredependency">Software Dependency</h3>
<p>Starting with an (opensource) software install, the typical method calls for installation of various</p>]]></description><link>https://wat.im/tensorflow-gpu-training/</link><guid isPermaLink="false">5e72c7cead22300c2d34d8ae</guid><dc:creator><![CDATA[Jeff Waller]]></dc:creator><pubDate>Thu, 19 Mar 2020 02:22:26 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>There are at least 5 dependencies here.</p>
<ol>
<li>Software dependency on TensorFlow version.</li>
<li>TensorFlow dependency on python version.</li>
<li>TensorFlow dependency on <code>cuda</code> and <code>cuDNN</code>.</li>
<li>Cuda dependency on NVidia drivers.</li>
<li>NVidia driver dependency on NVidia hardware.</li>
</ol>
<h3 id="softwaredependency">Software Dependency</h3>
<p>Starting with an (opensource) software install, the typical method calls for installation of various support frameworks, in particular TensorFlow.  Assuming <code>pip</code> rather than <code>anaconda</code>:</p>
<pre><code>pip install tensorflow_gpu
</code></pre>
<p>Usually the required version of cuda is whichever version was the most up-to-date at the time the reference tensorflow build was compiled.</p>
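<p>A quick way to confirm that the whole chain (driver, cuda, cuDNN, TensorFlow build) lines up is to ask TensorFlow itself; a sketch that covers the 1.x and 2.x versions discussed below:</p>
<pre><code>import tensorflow as tf

print(&apos;TensorFlow&apos;, tf.__version__)
print(&apos;built with CUDA:&apos;, tf.test.is_built_with_cuda())

# TF 1.x style check (still present, but deprecated, in early 2.x):
print(&apos;GPU available:&apos;, tf.test.is_gpu_available())

# TF 2.x style check:
# print(tf.config.list_physical_devices(&apos;GPU&apos;))
</code></pre>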
<h3 id="tensorflowpythonrequirements">Tensorflow python requirements</h3>
<ul>
<li>Python version 3.8+ is supported only by TensorFlow 2.0+</li>
<li>Python version 3.7 is supported by TensorFlow 1.15.x</li>
</ul>
<h3 id="cudalibraryrequirementsoftensorflow">Cuda Library Requirements of TensorFlow</h3>
<ul>
<li>TensorFlow 2.0 requires cuda 10.2</li>
<li>TensorFlow 1.15.x requires cuda 10.0</li>
<li>TensorFlow 1.12.x requires cuda 9.2</li>
</ul>
<h3 id="nnsupportlibrarycudnndepencies">NN Support Library (cuDNN) Depencies</h3>
<p>Likewise there is a set of <code>cuDNN</code>/<code>cuda</code> dependencies.  Typically there will be multiple compatible versions; each one is a separate download <a href="https://developer.nvidia.com/rdp/cudnn-download">available from the NVidia developer website</a> (see below), as are the <a href="https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html">install instructions</a>.</p>
<ul>
<li>cuda version 10.2 is only compatible with cuDNN version 7.6.5</li>
<li>cuda versions 10.1, 10.0, 9.2, 9.0 support cuDNN versions 7.6.5, 7.6.4, 7.6.3, 7.6.2, 7.6.1, 7.6.0, 7.5.1, 7.5.0</li>
<li>cuda versions 10.0, 9.2, 9.0 support cuDNN versions 7.4.2, 7.4.1, 7.3.1, 7.3.0</li>
<li>cuda versions 9.2, 9.0, 8.0 support cuDNN versions 7.1.4, 7.1.3, 7.1.2</li>
<li>cuda versions 9.1, 9.0, 8.0 support cuDNN version 7.0.5</li>
<li>cuda version 9.0 supports cuDNN version 7.0.4</li>
<li>cuda versions 8.0, 7.5 support cuDNN versions 6.0, 5.1, 5.0</li>
<li>cuda 7.0 and later supports cuDNN versions 4 and 3</li>
<li>cuda 6.5 and later supports cuDNN version 2</li>
</ul>
<h3 id="cudanvidiadrivercompatibilityanddriverhardwarecompatibility">Cuda NVidia driver compatibility and driver-hardware compatibility</h3>
<p>Finally, different versions of the cuda library require different versions of the NVidia drivers while each driver requires a minimum hardware architecture.  Generally newer libraries require newer hardware as they leverage newer hardware capabilities.  Refer to the <a href="https://docs.nvidia.com/deploy/cuda-compatibility/index.html">NVidia cuda compatibility page</a> for details.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Notes on the use of autoencoders]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Constructing an autoencoder (A) is an unsupervised learning NN technique in which an input X is mapped to itself X-&gt;A-&gt;X.  Importantly, there are multiple layers in this NN, which contains in its interior a &quot;bottleneck&quot; which has a capacity smaller than the input and</p>]]></description><link>https://wat.im/notes-for-variational-autoencoder/</link><guid isPermaLink="false">5e55e90cad22300c2d34d86a</guid><dc:creator><![CDATA[Jeff Waller]]></dc:creator><pubDate>Wed, 26 Feb 2020 04:17:01 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Constructing an autoencoder (A) is an unsupervised learning NN technique in which an input X is mapped to itself X-&gt;A-&gt;X.  Importantly, there are multiple layers in this NN, which contains in its interior a &quot;bottleneck&quot; which has a capacity smaller than the input and output layers.  The purpose of the bottleneck is to form a set of latent variables encoding the salient part of the samples while filtering out the noise.  Since the 1990s, several variants have been introduced.</p>
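<p>As a concrete picture of the bottleneck, a minimal sketch in keras (layer sizes are illustrative assumptions, not taken from any particular paper):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers, models

# X -&gt; encoder -&gt; bottleneck (latent variables) -&gt; decoder -&gt; X
inputs = layers.Input(shape=(784,))            # e.g. flattened 28x28 images
h = layers.Dense(128, activation=&apos;relu&apos;)(inputs)
latent = layers.Dense(16, activation=&apos;relu&apos;)(h)   # the bottleneck
h = layers.Dense(128, activation=&apos;relu&apos;)(latent)
outputs = layers.Dense(784, activation=&apos;sigmoid&apos;)(h)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer=&apos;adam&apos;, loss=&apos;mse&apos;)
# Unsupervised: the input is also the target.
# autoencoder.fit(x_train, x_train, epochs=10)
</code></pre>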
<h3 id="moreflexiblethanpca">More flexible than PCA</h3>
<p>Like PCA, autoencoders can be used for dimensionality reduction, but they have the added benefit that datapoints are not necessarily compressed onto a hyperplane, since the encoding is allowed to be non-linear.</p>
<h3 id="surveyandoverview">Survey and Overview</h3>
<ul>
<li>Definition <a href="https://en.wikipedia.org/wiki/Autoencoder">Wikipedia</a></li>
<li>Intro via anomaly detection <a href="https://towardsdatascience.com/anomaly-detection-with-autoencoder-b4cdce4866a6">Anomaly Detection with Autoencoders Made Easy</a></li>
<li><a href="https://towardsdatascience.com/applied-deep-learning-part-3-autoencoders-1c083af4d798">Applied Deep Learning Part 3 (Autoencoders)</a></li>
</ul>
<h3 id="variationalautoencoders">Variational Autoencoders</h3>
<ul>
<li>Explanation and theory, old-ish example uses Caffe <a href="https://arxiv.org/abs/1606.05908">Tutorial on Variational Autoencoders</a></li>
<li>Generation of sample data using variational autoencoders, with an example in python-keras: <a href="https://towardsdatascience.com/generating-new-faces-with-variational-autoencoders-d13cfcb5f0a8">Generating new faces with Variational Autoencoders</a>.</li>
<li>Variational Autoencoder example <a href="https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf">Intuitively Understanding Variational Autoencoders</a></li>
</ul>
<h4 id="kldivergence">KL Divergence</h4>
<ul>
<li><a href="https://www.machinecurve.com/index.php/2019/12/21/how-to-use-kullback-leibler-divergence-kl-divergence-with-keras/">How to use Kullback-Leibler divergence (KL divergence) with Keras?</a></li>
<li><a href="https://www.countbayesie.com/blog/2017/5/9/kullback-leibler-divergence-explained">Kullback-Leibler Divergence Explained</a></li>
<li>As of TensorFlow 1.15, KL divergence is implemented directly (<a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/keras/losses/KLDivergence">see the Keras docs</a>); a short sketch follows this list</li>
</ul>
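<p>For the Gaussian latent space of a VAE, the KL term has a simple closed form.  The sketch below shows that form alongside the built-in class mentioned above, which is a different thing: it compares two discrete probability distributions.  (Usage assumes TF 2.x eager execution.)</p>
<pre><code>import tensorflow as tf

# Closed-form KL( N(mu, sigma^2) || N(0, 1) ) used in the VAE loss,
# with z_log_var = log(sigma^2), inputs shaped (batch, latent_dim):
def gaussian_kl(z_mean, z_log_var):
    return -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean)
                      - tf.exp(z_log_var), axis=1))

# The built-in loss compares two probability distributions instead:
kld = tf.keras.losses.KLDivergence()
p = tf.constant([[0.7, 0.3]])
q = tf.constant([[0.5, 0.5]])
print(kld(p, q).numpy())
</code></pre>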
<h3 id="disentangledvariationalautoencoders">Disentangled Variational Autoencoders</h3>
<ul>
<li>Introduction to disentangled VAE <a href="https://towardsdatascience.com/what-a-disentangled-net-we-weave-representation-learning-in-vaes-pt-1-9e5dbc205bd1">What a disentangled Net We Weave: Representation Learning in VAEs (Pt.1)</a>.</li>
<li><a href="https://towardsdatascience.com/with-great-power-comes-poor-latent-codes-representation-learning-in-vaes-pt-2-57403690e92b">With Great Power Comes Poor Latent Codes: Representation Learning in VAEs (Pt. 2)</a>.</li>
<li>InfoVAE <a href="https://arxiv.org/abs/1706.02262">InfoVAE: Information Maximizing Variational Autoencoders<br>
</a></li>
</ul>
<h3 id="variations">Variations</h3>
<ul>
<li>Use of ConvLSTM <a href="https://towardsdatascience.com/prototyping-an-anomaly-detection-system-for-videos-step-by-step-using-lstm-convolutional-4e06b7dcdd29">Anomaly Detection in Videos using LSTM Convolutional Autoencoder</a></li>
<li>Example of convolutional autoencoders <a href="https://towardsdatascience.com/convolutional-autoencoders-for-image-noise-reduction-32fce9fc1763">Convolutional Autoencoders for Image Noise Reduction</a></li>
<li><a href="https://stackoverflow.com/questions/52435274/how-to-use-keras-merge-layer-for-autoencoder-with-two-ouput">multi input</a></li>
<li>SOM-VAE described in <a href="https://arxiv.org/abs/1806.02199">SOM-VAE: Interpretable Discrete Representation Learning on Time Series</a>.</li>
</ul>
<h3 id="related">Related</h3>
<ul>
<li>Video frame prediction with encoding and decoding layers and a bottleneck <a href="https://www.youtube.com/watch?v=MjFpgyWH-pk&amp;t=575s">London TensorFlow Meetup - Convolutional LSTMs for video prediction</a></li>
</ul>
<h3 id="howto">Howto</h3>
<p>A variational autoencoder <a href="https://keras.io/examples/generative/vae/">implementation in keras</a> with a custom training step; also applicable is <a href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/custom_training.ipynb#scrollTo=iuKuNXPORfqJ">Custom Training Using the Strategy Class</a>.  Within a custom loss function, it&apos;s sometimes necessary to <a href="https://github.com/keras-team/keras/issues/3155">get the epoch number</a>.</p>
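<p>A compressed sketch of how those pieces fit together, following the general shape of the Keras VAE example.  Encoder/decoder construction is omitted; the encoder is assumed to output a (z_mean, z_log_var) pair, and the custom train_step requires TF 2.2+.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

class Sampling(layers.Layer):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1).
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

class VAE(tf.keras.Model):
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder      # maps x -&gt; (z_mean, z_log_var)
        self.decoder = decoder      # maps z -&gt; reconstruction
        self.sampling = Sampling()

    def train_step(self, data):
        with tf.GradientTape() as tape:
            z_mean, z_log_var = self.encoder(data)
            z = self.sampling((z_mean, z_log_var))
            recon = self.decoder(z)
            recon_loss = tf.reduce_mean(
                tf.reduce_sum(tf.square(data - recon), axis=-1))
            kl_loss = -0.5 * tf.reduce_mean(
                tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean)
                              - tf.exp(z_log_var), axis=-1))
            loss = recon_loss + kl_loss
        grads = tape.gradient(loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {&apos;loss&apos;: loss, &apos;recon&apos;: recon_loss, &apos;kl&apos;: kl_loss}
</code></pre>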
<p>Handling the tradeoff between reconstruction error and KL divergence is worked through in <a href="https://stats.stackexchange.com/questions/341954/balancing-reconstruction-vs-kl-loss-variational-autoencoder">Balancing Reconstruction vs KL Loss Variational Autoencoder</a> on the Cross Validated stack exchange.</p>
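<p>In the sketch above the two terms are simply summed; the tradeoff discussed in that thread is usually handled by weighting the KL term (the weight is often called beta), e.g. inside <code>train_step</code>:</p>
<pre><code># beta &gt; 1 pushes toward disentangled latents; beta &lt; 1 guards
# against the KL term overwhelming reconstruction (posterior collapse).
beta = 0.5   # illustrative value; tune per problem
loss = recon_loss + beta * kl_loss
</code></pre>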
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>